I0321 23:20:43.620814 7 e2e.go:129] Starting e2e run "15868e10-8b7e-4bfb-9e34-eb41f461b339" on Ginkgo node 1
{"msg":"Test Suite starting","total":330,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1616368841 - Will randomize all specs
Will run 330 of 5737 specs

Mar 21 23:20:43.693: INFO: >>> kubeConfig: /root/.kube/config
Mar 21 23:20:43.696: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Mar 21 23:20:43.719: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Mar 21 23:20:43.877: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Mar 21 23:20:43.877: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Mar 21 23:20:43.877: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Mar 21 23:20:43.887: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
Mar 21 23:20:43.887: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Mar 21 23:20:43.887: INFO: e2e test version: v1.21.0-beta.1
Mar 21 23:20:43.887: INFO: kube-apiserver version: v1.21.0-alpha.0
Mar 21 23:20:43.887: INFO: >>> kubeConfig: /root/.kube/config
Mar 21 23:20:43.904: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be immutable if `immutable` field is set [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 21 23:20:43.905: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
Mar 21 23:20:44.166: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be immutable if `immutable` field is set [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[AfterEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 21 23:20:44.286: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6779" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be immutable if `immutable` field is set [Conformance]","total":330,"completed":1,"skipped":15,"failed":0}
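For context, the field this spec exercises is `immutable` on the ConfigMap object itself. A minimal sketch of such a ConfigMap follows; the name and data are illustrative, since the test uses generated names that the log does not print:

apiVersion: v1
kind: ConfigMap
metadata:
  name: demo-immutable-config   # hypothetical name
  namespace: default
immutable: true                 # once true, it can never be set back to false
data:
  example.key: example-value

With `immutable: true`, any update to `data` or `binaryData` is rejected by the API server with a Forbidden error; the only way to change the content is to delete and recreate the object.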
•{"msg":"PASSED [sig-storage] ConfigMap should be immutable if `immutable` field is set [Conformance]","total":330,"completed":1,"skipped":15,"failed":0} SSS ------------------------------ [sig-scheduling] SchedulerPreemption [Serial] PriorityClass endpoints verify PriorityClass endpoints can be operated with different HTTP methods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 21 23:20:44.320: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-preemption STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:90 Mar 21 23:20:44.572: INFO: Waiting up to 1m0s for all nodes to be ready Mar 21 23:21:45.122: INFO: Waiting for terminating namespaces to be deleted... [BeforeEach] PriorityClass endpoints /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 21 23:21:45.238: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-preemption-path STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] PriorityClass endpoints /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:679 [It] verify PriorityClass endpoints can be operated with different HTTP methods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Mar 21 23:21:48.326: INFO: PriorityClass.scheduling.k8s.io "p1" is invalid: Value: Forbidden: may not be changed in an update. Mar 21 23:21:48.922: INFO: PriorityClass.scheduling.k8s.io "p2" is invalid: Value: Forbidden: may not be changed in an update. [AfterEach] PriorityClass endpoints /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 21 23:21:51.339: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-path-9351" for this suite. [AfterEach] PriorityClass endpoints /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:693 [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 21 23:21:57.341: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-4463" for this suite. 
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu request [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 21 23:22:07.067: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Mar 21 23:22:09.180: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a17c107c-5372-46e8-8ba0-da8238f0878a" in namespace "projected-7811" to be "Succeeded or Failed"
Mar 21 23:22:09.671: INFO: Pod "downwardapi-volume-a17c107c-5372-46e8-8ba0-da8238f0878a": Phase="Pending", Reason="", readiness=false. Elapsed: 491.480136ms
Mar 21 23:22:13.017: INFO: Pod "downwardapi-volume-a17c107c-5372-46e8-8ba0-da8238f0878a": Phase="Pending", Reason="", readiness=false. Elapsed: 3.837122302s
Mar 21 23:22:15.468: INFO: Pod "downwardapi-volume-a17c107c-5372-46e8-8ba0-da8238f0878a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.288555003s
Mar 21 23:22:17.704: INFO: Pod "downwardapi-volume-a17c107c-5372-46e8-8ba0-da8238f0878a": Phase="Running", Reason="", readiness=true. Elapsed: 8.524501567s
Mar 21 23:22:20.653: INFO: Pod "downwardapi-volume-a17c107c-5372-46e8-8ba0-da8238f0878a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.473018732s
STEP: Saw pod success
Mar 21 23:22:20.653: INFO: Pod "downwardapi-volume-a17c107c-5372-46e8-8ba0-da8238f0878a" satisfied condition "Succeeded or Failed"
Mar 21 23:22:21.528: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-a17c107c-5372-46e8-8ba0-da8238f0878a container client-container: 
STEP: delete the pod
Mar 21 23:22:24.198: INFO: Waiting for pod downwardapi-volume-a17c107c-5372-46e8-8ba0-da8238f0878a to disappear
Mar 21 23:22:24.485: INFO: Pod downwardapi-volume-a17c107c-5372-46e8-8ba0-da8238f0878a no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 21 23:22:24.485: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7811" for this suite.
• [SLOW TEST:18.479 seconds]
[sig-storage] Projected downwardAPI
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should provide container's cpu request [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":330,"completed":3,"skipped":49,"failed":0}
S
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory request [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 21 23:22:25.546: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
[It] should provide container's memory request [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Mar 21 23:22:27.817: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b944bbad-1fa6-490e-9529-7bbb5f42284f" in namespace "downward-api-9520" to be "Succeeded or Failed"
Mar 21 23:22:28.047: INFO: Pod "downwardapi-volume-b944bbad-1fa6-490e-9529-7bbb5f42284f": Phase="Pending", Reason="", readiness=false. Elapsed: 229.802807ms
Mar 21 23:22:30.159: INFO: Pod "downwardapi-volume-b944bbad-1fa6-490e-9529-7bbb5f42284f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.342051765s
Mar 21 23:22:32.243: INFO: Pod "downwardapi-volume-b944bbad-1fa6-490e-9529-7bbb5f42284f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.42513556s
Mar 21 23:22:34.641: INFO: Pod "downwardapi-volume-b944bbad-1fa6-490e-9529-7bbb5f42284f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.823693934s
Mar 21 23:22:37.147: INFO: Pod "downwardapi-volume-b944bbad-1fa6-490e-9529-7bbb5f42284f": Phase="Pending", Reason="", readiness=false. Elapsed: 9.329162434s
Mar 21 23:22:39.530: INFO: Pod "downwardapi-volume-b944bbad-1fa6-490e-9529-7bbb5f42284f": Phase="Running", Reason="", readiness=true. Elapsed: 11.712548503s
Mar 21 23:22:41.594: INFO: Pod "downwardapi-volume-b944bbad-1fa6-490e-9529-7bbb5f42284f": Phase="Running", Reason="", readiness=true. Elapsed: 13.776418075s
Mar 21 23:22:43.990: INFO: Pod "downwardapi-volume-b944bbad-1fa6-490e-9529-7bbb5f42284f": Phase="Running", Reason="", readiness=true. Elapsed: 16.172640653s
Mar 21 23:22:46.100: INFO: Pod "downwardapi-volume-b944bbad-1fa6-490e-9529-7bbb5f42284f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 18.282650994s
STEP: Saw pod success
Mar 21 23:22:46.100: INFO: Pod "downwardapi-volume-b944bbad-1fa6-490e-9529-7bbb5f42284f" satisfied condition "Succeeded or Failed"
Mar 21 23:22:46.219: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-b944bbad-1fa6-490e-9529-7bbb5f42284f container client-container: 
STEP: delete the pod
Mar 21 23:22:46.974: INFO: Waiting for pod downwardapi-volume-b944bbad-1fa6-490e-9529-7bbb5f42284f to disappear
Mar 21 23:22:47.249: INFO: Pod downwardapi-volume-b944bbad-1fa6-490e-9529-7bbb5f42284f no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 21 23:22:47.249: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9520" for this suite.
• [SLOW TEST:21.838 seconds]
[sig-storage] Downward API volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should provide container's memory request [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":330,"completed":4,"skipped":50,"failed":0}
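Both downward API specs above create a pod whose volume exposes the container's own resource requests as files, then read those files back. A minimal sketch of such a pod, with illustrative names and request values (the tests generate UUID-suffixed names); the "Projected downwardAPI" variant nests the same items under a projected volume's sources:

apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: client-container           # container name matches the log
    image: k8s.gcr.io/e2e-test-images/agnhost:2.28   # assumption: the suite's agnhost image
    command: ["sh", "-c", "cat /etc/podinfo/cpu_request /etc/podinfo/memory_request"]
    resources:
      requests:
        cpu: 250m      # illustrative; this value is what cpu_request will contain
        memory: 64Mi   # illustrative; this value is what memory_request will contain
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_request
        resourceFieldRef:
          containerName: client-container
          resource: requests.cpu
      - path: memory_request
        resourceFieldRef:
          containerName: client-container
          resource: requests.memory

The pod runs to completion and the test asserts the file contents, which is why the log waits for the "Succeeded or Failed" condition rather than for readiness.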
[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] 
  should perform rolling updates and roll backs of template modifications [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-apps] StatefulSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 21 23:22:47.384: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90
[BeforeEach] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:105
STEP: Creating service test in namespace statefulset-7908
[It] should perform rolling updates and roll backs of template modifications [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a new StatefulSet
Mar 21 23:22:48.640: INFO: Found 0 stateful pods, waiting for 3
Mar 21 23:22:58.796: INFO: Found 2 stateful pods, waiting for 3
Mar 21 23:23:08.710: INFO: Waiting for pod ss2-0 to enter Running - Ready=true,
currently Running - Ready=true Mar 21 23:23:08.710: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Mar 21 23:23:08.710: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false Mar 21 23:23:18.906: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Mar 21 23:23:18.906: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Mar 21 23:23:18.906: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true Mar 21 23:23:19.165: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=statefulset-7908 exec ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Mar 21 23:23:25.061: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" Mar 21 23:23:25.061: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Mar 21 23:23:25.061: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' STEP: Updating StatefulSet template: update image from k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 to k8s.gcr.io/e2e-test-images/httpd:2.4.39-1 Mar 21 23:23:35.351: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Updating Pods in reverse ordinal order Mar 21 23:23:45.540: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=statefulset-7908 exec ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 21 23:23:45.820: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" Mar 21 23:23:45.820: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Mar 21 23:23:45.820: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Mar 21 23:23:56.198: INFO: Waiting for StatefulSet statefulset-7908/ss2 to complete update Mar 21 23:23:56.198: INFO: Waiting for Pod statefulset-7908/ss2-0 to have revision ss2-5bbbc9fc94 update revision ss2-677d6db895 Mar 21 23:23:56.198: INFO: Waiting for Pod statefulset-7908/ss2-1 to have revision ss2-5bbbc9fc94 update revision ss2-677d6db895 Mar 21 23:23:56.198: INFO: Waiting for Pod statefulset-7908/ss2-2 to have revision ss2-5bbbc9fc94 update revision ss2-677d6db895 Mar 21 23:24:06.542: INFO: Waiting for StatefulSet statefulset-7908/ss2 to complete update Mar 21 23:24:06.542: INFO: Waiting for Pod statefulset-7908/ss2-0 to have revision ss2-5bbbc9fc94 update revision ss2-677d6db895 Mar 21 23:24:06.542: INFO: Waiting for Pod statefulset-7908/ss2-1 to have revision ss2-5bbbc9fc94 update revision ss2-677d6db895 Mar 21 23:24:06.542: INFO: Waiting for Pod statefulset-7908/ss2-2 to have revision ss2-5bbbc9fc94 update revision ss2-677d6db895 Mar 21 23:24:16.539: INFO: Waiting for StatefulSet statefulset-7908/ss2 to complete update Mar 21 23:24:16.539: INFO: Waiting for Pod statefulset-7908/ss2-0 to have revision ss2-5bbbc9fc94 update revision ss2-677d6db895 Mar 21 23:24:16.539: INFO: Waiting for Pod statefulset-7908/ss2-1 to have revision ss2-5bbbc9fc94 update revision ss2-677d6db895 Mar 21 23:24:16.539: INFO: Waiting for Pod statefulset-7908/ss2-2 to have revision ss2-5bbbc9fc94 update revision ss2-677d6db895 Mar 21 23:24:26.697: INFO: Waiting for StatefulSet statefulset-7908/ss2 
to complete update Mar 21 23:24:26.697: INFO: Waiting for Pod statefulset-7908/ss2-0 to have revision ss2-5bbbc9fc94 update revision ss2-677d6db895 Mar 21 23:24:26.697: INFO: Waiting for Pod statefulset-7908/ss2-1 to have revision ss2-5bbbc9fc94 update revision ss2-677d6db895 Mar 21 23:24:26.697: INFO: Waiting for Pod statefulset-7908/ss2-2 to have revision ss2-5bbbc9fc94 update revision ss2-677d6db895 Mar 21 23:24:36.445: INFO: Waiting for StatefulSet statefulset-7908/ss2 to complete update Mar 21 23:24:36.445: INFO: Waiting for Pod statefulset-7908/ss2-0 to have revision ss2-5bbbc9fc94 update revision ss2-677d6db895 Mar 21 23:24:36.445: INFO: Waiting for Pod statefulset-7908/ss2-1 to have revision ss2-5bbbc9fc94 update revision ss2-677d6db895 Mar 21 23:24:46.345: INFO: Waiting for StatefulSet statefulset-7908/ss2 to complete update Mar 21 23:24:46.345: INFO: Waiting for Pod statefulset-7908/ss2-0 to have revision ss2-5bbbc9fc94 update revision ss2-677d6db895 Mar 21 23:24:46.345: INFO: Waiting for Pod statefulset-7908/ss2-1 to have revision ss2-5bbbc9fc94 update revision ss2-677d6db895 Mar 21 23:24:56.315: INFO: Waiting for StatefulSet statefulset-7908/ss2 to complete update Mar 21 23:24:56.315: INFO: Waiting for Pod statefulset-7908/ss2-0 to have revision ss2-5bbbc9fc94 update revision ss2-677d6db895 Mar 21 23:24:56.315: INFO: Waiting for Pod statefulset-7908/ss2-1 to have revision ss2-5bbbc9fc94 update revision ss2-677d6db895 Mar 21 23:25:08.117: INFO: Waiting for StatefulSet statefulset-7908/ss2 to complete update Mar 21 23:25:08.118: INFO: Waiting for Pod statefulset-7908/ss2-0 to have revision ss2-5bbbc9fc94 update revision ss2-677d6db895 Mar 21 23:25:08.118: INFO: Waiting for Pod statefulset-7908/ss2-1 to have revision ss2-5bbbc9fc94 update revision ss2-677d6db895 Mar 21 23:25:16.323: INFO: Waiting for StatefulSet statefulset-7908/ss2 to complete update Mar 21 23:25:16.323: INFO: Waiting for Pod statefulset-7908/ss2-0 to have revision ss2-5bbbc9fc94 update revision ss2-677d6db895 Mar 21 23:25:16.323: INFO: Waiting for Pod statefulset-7908/ss2-1 to have revision ss2-5bbbc9fc94 update revision ss2-677d6db895 Mar 21 23:25:26.291: INFO: Waiting for StatefulSet statefulset-7908/ss2 to complete update Mar 21 23:25:26.291: INFO: Waiting for Pod statefulset-7908/ss2-0 to have revision ss2-5bbbc9fc94 update revision ss2-677d6db895 Mar 21 23:25:26.291: INFO: Waiting for Pod statefulset-7908/ss2-1 to have revision ss2-5bbbc9fc94 update revision ss2-677d6db895 Mar 21 23:25:36.445: INFO: Waiting for StatefulSet statefulset-7908/ss2 to complete update Mar 21 23:25:36.445: INFO: Waiting for Pod statefulset-7908/ss2-0 to have revision ss2-5bbbc9fc94 update revision ss2-677d6db895 Mar 21 23:25:36.445: INFO: Waiting for Pod statefulset-7908/ss2-1 to have revision ss2-5bbbc9fc94 update revision ss2-677d6db895 Mar 21 23:25:46.347: INFO: Waiting for StatefulSet statefulset-7908/ss2 to complete update Mar 21 23:25:46.347: INFO: Waiting for Pod statefulset-7908/ss2-0 to have revision ss2-5bbbc9fc94 update revision ss2-677d6db895 Mar 21 23:25:46.347: INFO: Waiting for Pod statefulset-7908/ss2-1 to have revision ss2-5bbbc9fc94 update revision ss2-677d6db895 Mar 21 23:25:56.738: INFO: Waiting for StatefulSet statefulset-7908/ss2 to complete update Mar 21 23:25:56.739: INFO: Waiting for Pod statefulset-7908/ss2-0 to have revision ss2-5bbbc9fc94 update revision ss2-677d6db895 Mar 21 23:26:06.307: INFO: Waiting for StatefulSet statefulset-7908/ss2 to complete update Mar 21 23:26:06.307: INFO: Waiting for Pod 
statefulset-7908/ss2-0 to have revision ss2-5bbbc9fc94 update revision ss2-677d6db895 Mar 21 23:26:16.212: INFO: Waiting for StatefulSet statefulset-7908/ss2 to complete update Mar 21 23:26:16.212: INFO: Waiting for Pod statefulset-7908/ss2-0 to have revision ss2-5bbbc9fc94 update revision ss2-677d6db895 Mar 21 23:26:26.746: INFO: Waiting for StatefulSet statefulset-7908/ss2 to complete update Mar 21 23:26:26.746: INFO: Waiting for Pod statefulset-7908/ss2-0 to have revision ss2-5bbbc9fc94 update revision ss2-677d6db895 Mar 21 23:26:36.234: INFO: Waiting for StatefulSet statefulset-7908/ss2 to complete update Mar 21 23:26:36.234: INFO: Waiting for Pod statefulset-7908/ss2-0 to have revision ss2-5bbbc9fc94 update revision ss2-677d6db895 Mar 21 23:26:46.262: INFO: Waiting for StatefulSet statefulset-7908/ss2 to complete update STEP: Rolling back to a previous revision Mar 21 23:26:56.482: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=statefulset-7908 exec ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Mar 21 23:26:56.901: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" Mar 21 23:26:56.901: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Mar 21 23:26:56.901: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Mar 21 23:27:07.811: INFO: Updating stateful set ss2 STEP: Rolling back update in reverse ordinal order Mar 21 23:27:19.105: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=statefulset-7908 exec ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 21 23:27:19.768: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" Mar 21 23:27:19.768: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Mar 21 23:27:19.768: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Mar 21 23:27:30.378: INFO: Waiting for StatefulSet statefulset-7908/ss2 to complete update Mar 21 23:27:30.378: INFO: Waiting for Pod statefulset-7908/ss2-0 to have revision ss2-677d6db895 update revision ss2-5bbbc9fc94 Mar 21 23:27:30.378: INFO: Waiting for Pod statefulset-7908/ss2-1 to have revision ss2-677d6db895 update revision ss2-5bbbc9fc94 Mar 21 23:27:30.378: INFO: Waiting for Pod statefulset-7908/ss2-2 to have revision ss2-677d6db895 update revision ss2-5bbbc9fc94 Mar 21 23:27:40.927: INFO: Waiting for StatefulSet statefulset-7908/ss2 to complete update Mar 21 23:27:40.928: INFO: Waiting for Pod statefulset-7908/ss2-0 to have revision ss2-677d6db895 update revision ss2-5bbbc9fc94 Mar 21 23:27:40.928: INFO: Waiting for Pod statefulset-7908/ss2-1 to have revision ss2-677d6db895 update revision ss2-5bbbc9fc94 Mar 21 23:27:50.485: INFO: Waiting for StatefulSet statefulset-7908/ss2 to complete update Mar 21 23:27:50.485: INFO: Waiting for Pod statefulset-7908/ss2-0 to have revision ss2-677d6db895 update revision ss2-5bbbc9fc94 Mar 21 23:27:50.485: INFO: Waiting for Pod statefulset-7908/ss2-1 to have revision ss2-677d6db895 update revision ss2-5bbbc9fc94 Mar 21 23:28:00.417: INFO: Waiting for StatefulSet statefulset-7908/ss2 to complete update Mar 21 23:28:00.417: INFO: Waiting for Pod statefulset-7908/ss2-0 to have revision ss2-677d6db895 update revision 
ss2-5bbbc9fc94 Mar 21 23:28:00.417: INFO: Waiting for Pod statefulset-7908/ss2-1 to have revision ss2-677d6db895 update revision ss2-5bbbc9fc94 Mar 21 23:28:11.077: INFO: Waiting for StatefulSet statefulset-7908/ss2 to complete update Mar 21 23:28:11.077: INFO: Waiting for Pod statefulset-7908/ss2-0 to have revision ss2-677d6db895 update revision ss2-5bbbc9fc94 Mar 21 23:28:11.077: INFO: Waiting for Pod statefulset-7908/ss2-1 to have revision ss2-677d6db895 update revision ss2-5bbbc9fc94 Mar 21 23:28:20.529: INFO: Waiting for StatefulSet statefulset-7908/ss2 to complete update Mar 21 23:28:20.529: INFO: Waiting for Pod statefulset-7908/ss2-0 to have revision ss2-677d6db895 update revision ss2-5bbbc9fc94 Mar 21 23:28:20.529: INFO: Waiting for Pod statefulset-7908/ss2-1 to have revision ss2-677d6db895 update revision ss2-5bbbc9fc94 Mar 21 23:28:30.439: INFO: Waiting for StatefulSet statefulset-7908/ss2 to complete update Mar 21 23:28:30.439: INFO: Waiting for Pod statefulset-7908/ss2-0 to have revision ss2-677d6db895 update revision ss2-5bbbc9fc94 Mar 21 23:28:30.439: INFO: Waiting for Pod statefulset-7908/ss2-1 to have revision ss2-677d6db895 update revision ss2-5bbbc9fc94 Mar 21 23:28:40.491: INFO: Waiting for StatefulSet statefulset-7908/ss2 to complete update Mar 21 23:28:40.491: INFO: Waiting for Pod statefulset-7908/ss2-0 to have revision ss2-677d6db895 update revision ss2-5bbbc9fc94 Mar 21 23:28:40.491: INFO: Waiting for Pod statefulset-7908/ss2-1 to have revision ss2-677d6db895 update revision ss2-5bbbc9fc94 Mar 21 23:28:50.762: INFO: Waiting for StatefulSet statefulset-7908/ss2 to complete update Mar 21 23:28:50.763: INFO: Waiting for Pod statefulset-7908/ss2-0 to have revision ss2-677d6db895 update revision ss2-5bbbc9fc94 Mar 21 23:29:00.701: INFO: Waiting for StatefulSet statefulset-7908/ss2 to complete update Mar 21 23:29:00.701: INFO: Waiting for Pod statefulset-7908/ss2-0 to have revision ss2-677d6db895 update revision ss2-5bbbc9fc94 Mar 21 23:29:10.539: INFO: Waiting for StatefulSet statefulset-7908/ss2 to complete update Mar 21 23:29:10.539: INFO: Waiting for Pod statefulset-7908/ss2-0 to have revision ss2-677d6db895 update revision ss2-5bbbc9fc94 Mar 21 23:29:20.523: INFO: Waiting for StatefulSet statefulset-7908/ss2 to complete update Mar 21 23:29:20.523: INFO: Waiting for Pod statefulset-7908/ss2-0 to have revision ss2-677d6db895 update revision ss2-5bbbc9fc94 Mar 21 23:29:30.518: INFO: Waiting for StatefulSet statefulset-7908/ss2 to complete update Mar 21 23:29:30.518: INFO: Waiting for Pod statefulset-7908/ss2-0 to have revision ss2-677d6db895 update revision ss2-5bbbc9fc94 Mar 21 23:29:40.637: INFO: Waiting for StatefulSet statefulset-7908/ss2 to complete update Mar 21 23:29:40.637: INFO: Waiting for Pod statefulset-7908/ss2-0 to have revision ss2-677d6db895 update revision ss2-5bbbc9fc94 Mar 21 23:29:50.465: INFO: Waiting for StatefulSet statefulset-7908/ss2 to complete update Mar 21 23:29:50.465: INFO: Waiting for Pod statefulset-7908/ss2-0 to have revision ss2-677d6db895 update revision ss2-5bbbc9fc94 Mar 21 23:30:00.863: INFO: Waiting for StatefulSet statefulset-7908/ss2 to complete update Mar 21 23:30:10.471: INFO: Waiting for StatefulSet statefulset-7908/ss2 to complete update Mar 21 23:30:20.419: INFO: Waiting for StatefulSet statefulset-7908/ss2 to complete update Mar 21 23:30:30.445: INFO: Waiting for StatefulSet statefulset-7908/ss2 to complete update Mar 21 23:30:40.438: INFO: Waiting for StatefulSet statefulset-7908/ss2 to complete update Mar 21 
23:30:50.459: INFO: Waiting for StatefulSet statefulset-7908/ss2 to complete update
[AfterEach] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:116
Mar 21 23:31:00.432: INFO: Deleting all statefulset in ns statefulset-7908
Mar 21 23:31:00.510: INFO: Scaling statefulset ss2 to 0
Mar 21 23:33:00.594: INFO: Waiting for statefulset status.replicas updated to 0
Mar 21 23:33:00.660: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 21 23:33:01.481: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-7908" for this suite.
• [SLOW TEST:614.292 seconds]
[sig-apps] StatefulSet
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:95
    should perform rolling updates and roll backs of template modifications [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":330,"completed":5,"skipped":50,"failed":0}
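The long "Waiting for Pod ... to have revision" runs above are the controller walking pods from current revision ss2-5bbbc9fc94 to update revision ss2-677d6db895 and back again; each template change mints a new ControllerRevision, and a rollback is simply another template update to the old image. A sketch of a StatefulSet like the ss2 used here, with illustrative labels (the log confirms the service name, image, and RollingUpdate behavior but not the label keys):

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss2
spec:
  serviceName: test          # the log creates a headless service "test" in the namespace
  replicas: 3
  selector:
    matchLabels:
      app: ss2               # illustrative label
  updateStrategy:
    type: RollingUpdate      # pods are replaced one at a time, highest ordinal first
  template:
    metadata:
      labels:
        app: ss2
    spec:
      containers:
      - name: webserver      # hypothetical container name
        image: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1   # updated to 2.4.39-1 mid-test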
SSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] 
  should perform canary updates and phased rolling updates of template modifications [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-apps] StatefulSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 21 23:33:01.676: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90
[BeforeEach] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:105
STEP: Creating service test in namespace statefulset-612
[It] should perform canary updates and phased rolling updates of template modifications [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a new StatefulSet
Mar 21 23:33:03.548: INFO: Found 0 stateful pods, waiting for 3
Mar 21 23:33:13.598: INFO: Found 2 stateful pods, waiting for 3
Mar 21 23:33:24.158: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Mar 21 23:33:24.158: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Mar 21 23:33:24.158: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Mar 21 23:33:34.350: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Mar 21 23:33:34.350: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Mar 21 23:33:34.350: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Updating stateful set template: update image from k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 to k8s.gcr.io/e2e-test-images/httpd:2.4.39-1
Mar 21 23:33:36.318: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Not applying an update when the partition is greater than the number of replicas
STEP: Performing a canary update
Mar 21 23:33:48.353: INFO: Updating stateful set ss2
Mar 21 23:33:49.116: INFO: Waiting for Pod statefulset-612/ss2-2 to have revision ss2-5bbbc9fc94 update revision ss2-677d6db895
Mar 21 23:34:00.191: INFO: Waiting for Pod statefulset-612/ss2-2 to have revision ss2-5bbbc9fc94 update revision ss2-677d6db895
Mar 21 23:34:09.688: INFO: Waiting for Pod statefulset-612/ss2-2 to have revision ss2-5bbbc9fc94 update revision ss2-677d6db895
Mar 21 23:34:20.191: INFO: Waiting for Pod statefulset-612/ss2-2 to have revision ss2-5bbbc9fc94 update revision ss2-677d6db895
Mar 21 23:34:31.020: INFO: Waiting for Pod statefulset-612/ss2-2 to have revision ss2-5bbbc9fc94 update revision ss2-677d6db895
STEP: Restoring Pods to the correct revision when they are deleted
Mar 21 23:34:46.084: INFO: Found 2 stateful pods, waiting for 3
Mar 21 23:34:57.637: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Mar 21 23:34:57.637: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Mar 21 23:34:57.637: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Mar 21 23:35:06.359: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Mar 21 23:35:06.359: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Mar 21 23:35:06.359: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Performing a phased rolling update
Mar 21 23:35:09.116: INFO: Updating stateful set ss2
Mar 21 23:35:10.273: INFO: Waiting for Pod statefulset-612/ss2-1 to have revision ss2-5bbbc9fc94 update revision ss2-677d6db895
Mar 21 23:35:21.854: INFO: Waiting for Pod statefulset-612/ss2-1 to have revision ss2-5bbbc9fc94 update revision ss2-677d6db895
Mar 21 23:35:30.362: INFO: Waiting for Pod statefulset-612/ss2-1 to have revision ss2-5bbbc9fc94 update revision ss2-677d6db895
Mar 21 23:35:40.437: INFO: Updating stateful set ss2
Mar 21 23:35:40.658: INFO: Waiting for StatefulSet statefulset-612/ss2 to complete update
Mar 21 23:35:40.658: INFO: Waiting for Pod statefulset-612/ss2-0 to have revision ss2-5bbbc9fc94 update revision ss2-677d6db895
Mar 21 23:35:50.959: INFO: Waiting for StatefulSet statefulset-612/ss2 to complete update
Mar 21 23:35:50.960: INFO: Waiting for Pod statefulset-612/ss2-0 to have revision ss2-5bbbc9fc94 update revision ss2-677d6db895
Mar 21 23:36:00.687: INFO: Waiting for StatefulSet statefulset-612/ss2 to complete update
Mar 21 23:36:00.687: INFO: Waiting for Pod statefulset-612/ss2-0 to have revision ss2-5bbbc9fc94 update revision ss2-677d6db895
Mar 21 23:36:11.151: INFO: Waiting for StatefulSet statefulset-612/ss2 to complete update
Mar 21 23:36:11.151: INFO: Waiting for Pod statefulset-612/ss2-0 to have revision ss2-5bbbc9fc94 update revision ss2-677d6db895
Mar 21 23:36:21.715: INFO: Waiting for StatefulSet statefulset-612/ss2 to complete update
[AfterEach] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:116
Mar 21 23:36:31.269: INFO: Deleting all statefulset in ns statefulset-612
Mar 21 23:36:31.553: INFO: Scaling statefulset ss2 to 0
Mar 21 23:39:41.840: INFO: Waiting for statefulset status.replicas updated to 0
Mar 21 23:39:41.886: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 21 23:39:42.134: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-612" for this suite.
• [SLOW TEST:400.717 seconds]
[sig-apps] StatefulSet
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:95
    should perform canary updates and phased rolling updates of template modifications [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":330,"completed":6,"skipped":66,"failed":0}
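The canary and phased behavior above is driven by the RollingUpdate partition: only ordinals greater than or equal to the partition receive the update revision, which is why only ss2-2 moves first and the test then lowers the partition step by step. A sketch of the relevant patch fragment, assuming the three-replica ss2 from the log (applied with, for example, kubectl patch):

spec:
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      partition: 2    # only ordinals >= 2 (here: ss2-2) are updated; lower it to 1, then 0, to phase the rollout

Setting the partition above replicas-1 (for example 3 here) means no pod is updated at all, which matches the "Not applying an update when the partition is greater than the number of replicas" step.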
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should test the lifecycle of an Endpoint [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 21 23:39:42.394: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746
[It] should test the lifecycle of an Endpoint [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: creating an Endpoint
STEP: waiting for available Endpoint
STEP: listing all Endpoints
STEP: updating the Endpoint
STEP: fetching the Endpoint
STEP: patching the Endpoint
STEP: fetching the Endpoint
STEP: deleting the Endpoint by Collection
STEP: waiting for Endpoint deletion
STEP: fetching the Endpoint
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 21 23:39:44.882: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-4742" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750
•{"msg":"PASSED [sig-network] Services should test the lifecycle of an Endpoint [Conformance]","total":330,"completed":7,"skipped":104,"failed":0}
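The lifecycle steps above (create, list, update, patch, delete-by-collection) all operate on a core/v1 Endpoints object. A sketch of such an object; the log does not print the object's name or addresses, so these values are illustrative:

apiVersion: v1
kind: Endpoints
metadata:
  name: example-endpoint      # hypothetical name
  namespace: services-4742    # namespace from the log
subsets:
- addresses:
  - ip: 10.0.0.1              # illustrative address
  ports:
  - name: http
    port: 80
    protocol: TCP

The update and patch steps typically rewrite `subsets` in place, and "deleting the Endpoint by Collection" is a DELETE against the collection URL filtered by a label selector rather than against the single named object.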
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Guestbook application 
  should create and stop a working application [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 21 23:39:45.193: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241
[It] should create and stop a working application [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: creating all guestbook components
Mar 21 23:39:45.557: INFO: apiVersion: v1
kind: Service
metadata:
  name: agnhost-replica
  labels:
    app: agnhost
    role: replica
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: agnhost
    role: replica
    tier: backend

Mar 21 23:39:45.557: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=kubectl-439 create -f -'
Mar 21 23:40:01.656: INFO: stderr: ""
Mar 21 23:40:01.656: INFO: stdout: "service/agnhost-replica created\n"
Mar 21 23:40:01.657: INFO: apiVersion: v1
kind: Service
metadata:
  name: agnhost-primary
  labels:
    app: agnhost
    role: primary
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: agnhost
    role: primary
    tier: backend

Mar 21 23:40:01.657: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=kubectl-439 create -f -'
Mar 21 23:40:02.654: INFO: stderr: ""
Mar 21 23:40:02.655: INFO: stdout: "service/agnhost-primary created\n"
Mar 21 23:40:02.655: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend

Mar 21 23:40:02.655: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=kubectl-439 create -f -'
Mar 21 23:40:03.159: INFO: stderr: ""
Mar 21 23:40:03.159: INFO: stdout: "service/frontend created\n"
Mar 21 23:40:03.159: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: guestbook-frontend
        image: k8s.gcr.io/e2e-test-images/agnhost:2.28
        args: [ "guestbook", "--backend-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 80

Mar 21 23:40:03.159: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=kubectl-439 create -f -'
Mar 21 23:40:03.568: INFO: stderr: ""
Mar 21 23:40:03.568: INFO: stdout: "deployment.apps/frontend created\n"
Mar 21 23:40:03.569: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-primary
spec:
  replicas: 1
  selector:
    matchLabels:
      app: agnhost
      role: primary
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: primary
        tier: backend
    spec:
      containers:
      - name: primary
        image: k8s.gcr.io/e2e-test-images/agnhost:2.28
        args: [ "guestbook", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Mar 21 23:40:03.569: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=kubectl-439 create -f -'
Mar 21 23:40:04.264: INFO: stderr: ""
Mar 21 23:40:04.264: INFO: stdout: "deployment.apps/agnhost-primary created\n"
Mar 21 23:40:04.264: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-replica
spec:
  replicas: 2
  selector:
    matchLabels:
      app: agnhost
      role: replica
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: replica
        tier: backend
    spec:
      containers:
      - name: replica
        image: k8s.gcr.io/e2e-test-images/agnhost:2.28
        args: [ "guestbook", "--replicaof", "agnhost-primary", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Mar 21 23:40:04.264: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=kubectl-439 create -f -'
Mar 21 23:40:05.214: INFO: stderr: ""
Mar 21 23:40:05.214: INFO: stdout: "deployment.apps/agnhost-replica created\n"
STEP: validating guestbook app
Mar 21 23:40:05.214: INFO: Waiting for all frontend pods to be Running.
Mar 21 23:40:20.266: INFO: Waiting for frontend to serve content.
Mar 21 23:40:20.284: INFO: Trying to add a new entry to the guestbook.
Mar 21 23:40:20.331: INFO: Verifying that added entry can be retrieved.
STEP: using delete to clean up resources
Mar 21 23:40:20.379: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=kubectl-439 delete --grace-period=0 --force -f -'
Mar 21 23:40:20.699: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Mar 21 23:40:20.699: INFO: stdout: "service \"agnhost-replica\" force deleted\n"
STEP: using delete to clean up resources
Mar 21 23:40:20.699: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=kubectl-439 delete --grace-period=0 --force -f -'
Mar 21 23:40:21.217: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Mar 21 23:40:21.217: INFO: stdout: "service \"agnhost-primary\" force deleted\n"
STEP: using delete to clean up resources
Mar 21 23:40:21.217: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=kubectl-439 delete --grace-period=0 --force -f -'
Mar 21 23:40:21.699: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Mar 21 23:40:21.699: INFO: stdout: "service \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Mar 21 23:40:21.699: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=kubectl-439 delete --grace-period=0 --force -f -'
Mar 21 23:40:21.876: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Mar 21 23:40:21.876: INFO: stdout: "deployment.apps \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Mar 21 23:40:21.877: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=kubectl-439 delete --grace-period=0 --force -f -'
Mar 21 23:40:22.102: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Mar 21 23:40:22.102: INFO: stdout: "deployment.apps \"agnhost-primary\" force deleted\n"
STEP: using delete to clean up resources
Mar 21 23:40:22.103: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=kubectl-439 delete --grace-period=0 --force -f -'
Mar 21 23:40:23.036: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Mar 21 23:40:23.037: INFO: stdout: "deployment.apps \"agnhost-replica\" force deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 21 23:40:23.037: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-439" for this suite.
• [SLOW TEST:38.654 seconds]
[sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Guestbook application
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:336
    should create and stop a working application [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]","total":330,"completed":8,"skipped":122,"failed":0}
[sig-apps] Daemon set [Serial] 
  should run and stop complex daemon [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 21 23:40:23.847: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:135
[It] should run and stop complex daemon [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Mar 21 23:40:27.219: INFO: Creating daemon "daemon-set" with a node selector
STEP: Initially, daemon pods should not be running on any nodes.
Mar 21 23:40:27.462: INFO: Number of nodes with available pods: 0
Mar 21 23:40:27.462: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Change node label to blue, check that daemon pod is launched.
Mar 21 23:40:28.331: INFO: Number of nodes with available pods: 0 Mar 21 23:40:28.331: INFO: Node latest-worker2 is running more than one daemon pod Mar 21 23:40:29.441: INFO: Number of nodes with available pods: 0 Mar 21 23:40:29.441: INFO: Node latest-worker2 is running more than one daemon pod Mar 21 23:40:30.414: INFO: Number of nodes with available pods: 0 Mar 21 23:40:30.414: INFO: Node latest-worker2 is running more than one daemon pod Mar 21 23:40:31.419: INFO: Number of nodes with available pods: 0 Mar 21 23:40:31.419: INFO: Node latest-worker2 is running more than one daemon pod Mar 21 23:40:32.465: INFO: Number of nodes with available pods: 0 Mar 21 23:40:32.465: INFO: Node latest-worker2 is running more than one daemon pod Mar 21 23:40:33.477: INFO: Number of nodes with available pods: 0 Mar 21 23:40:33.477: INFO: Node latest-worker2 is running more than one daemon pod Mar 21 23:40:34.592: INFO: Number of nodes with available pods: 1 Mar 21 23:40:34.592: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Update the node label to green, and wait for daemons to be unscheduled Mar 21 23:40:34.873: INFO: Number of nodes with available pods: 1 Mar 21 23:40:34.873: INFO: Number of running nodes: 0, number of available pods: 1 Mar 21 23:40:35.998: INFO: Number of nodes with available pods: 0 Mar 21 23:40:35.998: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate Mar 21 23:40:36.130: INFO: Number of nodes with available pods: 0 Mar 21 23:40:36.130: INFO: Node latest-worker2 is running more than one daemon pod Mar 21 23:40:37.172: INFO: Number of nodes with available pods: 0 Mar 21 23:40:37.172: INFO: Node latest-worker2 is running more than one daemon pod Mar 21 23:40:38.172: INFO: Number of nodes with available pods: 0 Mar 21 23:40:38.172: INFO: Node latest-worker2 is running more than one daemon pod Mar 21 23:40:39.202: INFO: Number of nodes with available pods: 0 Mar 21 23:40:39.202: INFO: Node latest-worker2 is running more than one daemon pod Mar 21 23:40:40.156: INFO: Number of nodes with available pods: 0 Mar 21 23:40:40.156: INFO: Node latest-worker2 is running more than one daemon pod Mar 21 23:40:41.174: INFO: Number of nodes with available pods: 0 Mar 21 23:40:41.174: INFO: Node latest-worker2 is running more than one daemon pod Mar 21 23:40:42.161: INFO: Number of nodes with available pods: 0 Mar 21 23:40:42.161: INFO: Node latest-worker2 is running more than one daemon pod Mar 21 23:40:43.226: INFO: Number of nodes with available pods: 0 Mar 21 23:40:43.226: INFO: Node latest-worker2 is running more than one daemon pod Mar 21 23:40:44.265: INFO: Number of nodes with available pods: 0 Mar 21 23:40:44.265: INFO: Node latest-worker2 is running more than one daemon pod Mar 21 23:40:45.171: INFO: Number of nodes with available pods: 0 Mar 21 23:40:45.171: INFO: Node latest-worker2 is running more than one daemon pod Mar 21 23:40:46.242: INFO: Number of nodes with available pods: 0 Mar 21 23:40:46.242: INFO: Node latest-worker2 is running more than one daemon pod Mar 21 23:40:47.305: INFO: Number of nodes with available pods: 0 Mar 21 23:40:47.305: INFO: Node latest-worker2 is running more than one daemon pod Mar 21 23:40:48.337: INFO: Number of nodes with available pods: 0 Mar 21 23:40:48.337: INFO: Node latest-worker2 is running more than one daemon pod Mar 21 23:40:49.140: INFO: Number of nodes with available pods: 0 Mar 21 23:40:49.140: INFO: Node 
latest-worker2 is running more than one daemon pod Mar 21 23:40:50.189: INFO: Number of nodes with available pods: 0 Mar 21 23:40:50.190: INFO: Node latest-worker2 is running more than one daemon pod Mar 21 23:40:51.323: INFO: Number of nodes with available pods: 0 Mar 21 23:40:51.323: INFO: Node latest-worker2 is running more than one daemon pod Mar 21 23:40:52.713: INFO: Number of nodes with available pods: 0 Mar 21 23:40:52.713: INFO: Node latest-worker2 is running more than one daemon pod Mar 21 23:40:53.352: INFO: Number of nodes with available pods: 0 Mar 21 23:40:53.352: INFO: Node latest-worker2 is running more than one daemon pod Mar 21 23:40:54.166: INFO: Number of nodes with available pods: 0 Mar 21 23:40:54.166: INFO: Node latest-worker2 is running more than one daemon pod Mar 21 23:40:55.179: INFO: Number of nodes with available pods: 0 Mar 21 23:40:55.179: INFO: Node latest-worker2 is running more than one daemon pod Mar 21 23:40:56.143: INFO: Number of nodes with available pods: 0 Mar 21 23:40:56.143: INFO: Node latest-worker2 is running more than one daemon pod Mar 21 23:40:57.616: INFO: Number of nodes with available pods: 0 Mar 21 23:40:57.616: INFO: Node latest-worker2 is running more than one daemon pod Mar 21 23:40:58.190: INFO: Number of nodes with available pods: 0 Mar 21 23:40:58.190: INFO: Node latest-worker2 is running more than one daemon pod Mar 21 23:40:59.169: INFO: Number of nodes with available pods: 0 Mar 21 23:40:59.169: INFO: Node latest-worker2 is running more than one daemon pod Mar 21 23:41:00.349: INFO: Number of nodes with available pods: 0 Mar 21 23:41:00.349: INFO: Node latest-worker2 is running more than one daemon pod Mar 21 23:41:01.179: INFO: Number of nodes with available pods: 0 Mar 21 23:41:01.179: INFO: Node latest-worker2 is running more than one daemon pod Mar 21 23:41:02.160: INFO: Number of nodes with available pods: 0 Mar 21 23:41:02.160: INFO: Node latest-worker2 is running more than one daemon pod Mar 21 23:41:03.251: INFO: Number of nodes with available pods: 0 Mar 21 23:41:03.251: INFO: Node latest-worker2 is running more than one daemon pod Mar 21 23:41:04.182: INFO: Number of nodes with available pods: 0 Mar 21 23:41:04.182: INFO: Node latest-worker2 is running more than one daemon pod Mar 21 23:41:05.165: INFO: Number of nodes with available pods: 0 Mar 21 23:41:05.165: INFO: Node latest-worker2 is running more than one daemon pod Mar 21 23:41:06.506: INFO: Number of nodes with available pods: 0 Mar 21 23:41:06.506: INFO: Node latest-worker2 is running more than one daemon pod Mar 21 23:41:07.272: INFO: Number of nodes with available pods: 0 Mar 21 23:41:07.272: INFO: Node latest-worker2 is running more than one daemon pod Mar 21 23:41:08.173: INFO: Number of nodes with available pods: 0 Mar 21 23:41:08.173: INFO: Node latest-worker2 is running more than one daemon pod Mar 21 23:41:09.184: INFO: Number of nodes with available pods: 0 Mar 21 23:41:09.184: INFO: Node latest-worker2 is running more than one daemon pod Mar 21 23:41:10.142: INFO: Number of nodes with available pods: 0 Mar 21 23:41:10.142: INFO: Node latest-worker2 is running more than one daemon pod Mar 21 23:41:11.155: INFO: Number of nodes with available pods: 0 Mar 21 23:41:11.155: INFO: Node latest-worker2 is running more than one daemon pod Mar 21 23:41:12.172: INFO: Number of nodes with available pods: 0 Mar 21 23:41:12.172: INFO: Node latest-worker2 is running more than one daemon pod Mar 21 23:41:13.161: INFO: Number of nodes with available pods: 0 Mar 21 
23:41:13.161: INFO: Node latest-worker2 is running more than one daemon pod Mar 21 23:41:14.169: INFO: Number of nodes with available pods: 0 Mar 21 23:41:14.169: INFO: Node latest-worker2 is running more than one daemon pod Mar 21 23:41:15.139: INFO: Number of nodes with available pods: 0 Mar 21 23:41:15.139: INFO: Node latest-worker2 is running more than one daemon pod Mar 21 23:41:16.341: INFO: Number of nodes with available pods: 0 Mar 21 23:41:16.341: INFO: Node latest-worker2 is running more than one daemon pod Mar 21 23:41:17.146: INFO: Number of nodes with available pods: 0 Mar 21 23:41:17.146: INFO: Node latest-worker2 is running more than one daemon pod Mar 21 23:41:18.152: INFO: Number of nodes with available pods: 0 Mar 21 23:41:18.152: INFO: Node latest-worker2 is running more than one daemon pod Mar 21 23:41:19.160: INFO: Number of nodes with available pods: 0 Mar 21 23:41:19.160: INFO: Node latest-worker2 is running more than one daemon pod Mar 21 23:41:20.264: INFO: Number of nodes with available pods: 0 Mar 21 23:41:20.264: INFO: Node latest-worker2 is running more than one daemon pod Mar 21 23:41:21.852: INFO: Number of nodes with available pods: 0 Mar 21 23:41:21.852: INFO: Node latest-worker2 is running more than one daemon pod Mar 21 23:41:22.136: INFO: Number of nodes with available pods: 0 Mar 21 23:41:22.136: INFO: Node latest-worker2 is running more than one daemon pod Mar 21 23:41:23.166: INFO: Number of nodes with available pods: 0 Mar 21 23:41:23.166: INFO: Node latest-worker2 is running more than one daemon pod Mar 21 23:41:24.183: INFO: Number of nodes with available pods: 0 Mar 21 23:41:24.183: INFO: Node latest-worker2 is running more than one daemon pod Mar 21 23:41:25.179: INFO: Number of nodes with available pods: 0 Mar 21 23:41:25.179: INFO: Node latest-worker2 is running more than one daemon pod Mar 21 23:41:26.252: INFO: Number of nodes with available pods: 0 Mar 21 23:41:26.252: INFO: Node latest-worker2 is running more than one daemon pod Mar 21 23:41:27.149: INFO: Number of nodes with available pods: 0 Mar 21 23:41:27.149: INFO: Node latest-worker2 is running more than one daemon pod Mar 21 23:41:28.154: INFO: Number of nodes with available pods: 0 Mar 21 23:41:28.154: INFO: Node latest-worker2 is running more than one daemon pod Mar 21 23:41:29.167: INFO: Number of nodes with available pods: 0 Mar 21 23:41:29.167: INFO: Node latest-worker2 is running more than one daemon pod Mar 21 23:41:30.149: INFO: Number of nodes with available pods: 0 Mar 21 23:41:30.149: INFO: Node latest-worker2 is running more than one daemon pod Mar 21 23:41:31.133: INFO: Number of nodes with available pods: 0 Mar 21 23:41:31.133: INFO: Node latest-worker2 is running more than one daemon pod Mar 21 23:41:32.155: INFO: Number of nodes with available pods: 0 Mar 21 23:41:32.155: INFO: Node latest-worker2 is running more than one daemon pod Mar 21 23:41:33.211: INFO: Number of nodes with available pods: 0 Mar 21 23:41:33.211: INFO: Node latest-worker2 is running more than one daemon pod Mar 21 23:41:34.197: INFO: Number of nodes with available pods: 0 Mar 21 23:41:34.197: INFO: Node latest-worker2 is running more than one daemon pod Mar 21 23:41:35.242: INFO: Number of nodes with available pods: 0 Mar 21 23:41:35.243: INFO: Node latest-worker2 is running more than one daemon pod Mar 21 23:41:36.281: INFO: Number of nodes with available pods: 0 Mar 21 23:41:36.281: INFO: Node latest-worker2 is running more than one daemon pod Mar 21 23:41:37.209: INFO: Number of nodes with 
available pods: 0 Mar 21 23:41:37.209: INFO: Node latest-worker2 is running more than one daemon pod Mar 21 23:41:38.171: INFO: Number of nodes with available pods: 0 Mar 21 23:41:38.171: INFO: Node latest-worker2 is running more than one daemon pod Mar 21 23:41:39.136: INFO: Number of nodes with available pods: 0 Mar 21 23:41:39.136: INFO: Node latest-worker2 is running more than one daemon pod Mar 21 23:41:40.184: INFO: Number of nodes with available pods: 0 Mar 21 23:41:40.184: INFO: Node latest-worker2 is running more than one daemon pod Mar 21 23:41:41.137: INFO: Number of nodes with available pods: 0 Mar 21 23:41:41.137: INFO: Node latest-worker2 is running more than one daemon pod Mar 21 23:41:42.186: INFO: Number of nodes with available pods: 0 Mar 21 23:41:42.186: INFO: Node latest-worker2 is running more than one daemon pod Mar 21 23:41:43.233: INFO: Number of nodes with available pods: 0 Mar 21 23:41:43.233: INFO: Node latest-worker2 is running more than one daemon pod Mar 21 23:41:44.166: INFO: Number of nodes with available pods: 0 Mar 21 23:41:44.166: INFO: Node latest-worker2 is running more than one daemon pod Mar 21 23:41:45.371: INFO: Number of nodes with available pods: 0 Mar 21 23:41:45.371: INFO: Node latest-worker2 is running more than one daemon pod Mar 21 23:41:46.228: INFO: Number of nodes with available pods: 0 Mar 21 23:41:46.228: INFO: Node latest-worker2 is running more than one daemon pod Mar 21 23:41:47.444: INFO: Number of nodes with available pods: 0 Mar 21 23:41:47.444: INFO: Node latest-worker2 is running more than one daemon pod Mar 21 23:41:48.140: INFO: Number of nodes with available pods: 0 Mar 21 23:41:48.140: INFO: Node latest-worker2 is running more than one daemon pod Mar 21 23:41:49.372: INFO: Number of nodes with available pods: 0 Mar 21 23:41:49.372: INFO: Node latest-worker2 is running more than one daemon pod Mar 21 23:41:50.251: INFO: Number of nodes with available pods: 0 Mar 21 23:41:50.251: INFO: Node latest-worker2 is running more than one daemon pod Mar 21 23:41:51.186: INFO: Number of nodes with available pods: 1 Mar 21 23:41:51.186: INFO: Number of running nodes: 1, number of available pods: 1 [AfterEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:101 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-125, will wait for the garbage collector to delete the pods Mar 21 23:41:51.676: INFO: Deleting DaemonSet.extensions daemon-set took: 290.880268ms Mar 21 23:41:52.377: INFO: Terminating DaemonSet.extensions daemon-set pods took: 700.655826ms Mar 21 23:42:45.116: INFO: Number of nodes with available pods: 0 Mar 21 23:42:45.116: INFO: Number of running nodes: 0, number of available pods: 0 Mar 21 23:42:45.134: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"6948363"},"items":null} Mar 21 23:42:45.147: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"6948365"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 21 23:42:45.276: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-125" for this suite. 
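The complex-daemon flow above drives a DaemonSet whose pods are confined to nodes carrying a particular label, relabels the node to drain and re-admit the pod, and switches the update strategy to RollingUpdate mid-test (see the STEP lines above). A minimal sketch of a DaemonSet in that shape — the label key color, the container name, and the reuse of the httpd test image are illustrative assumptions, not values read from this log:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      app: daemon-set
  updateStrategy:
    type: RollingUpdate            # the spec switches to this strategy partway through
  template:
    metadata:
      labels:
        app: daemon-set
    spec:
      nodeSelector:
        color: green               # assumed label key; pods schedule only onto matching nodes
      containers:
      - name: app
        image: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1   # any long-running image works

Relabelling a node (e.g. kubectl label node latest-worker2 color=green --overwrite) is what makes the daemon pod appear or drain, which is the oscillation between "Number of running nodes: 0" and "1" visible in the polling above.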
• [SLOW TEST:141.555 seconds] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop complex daemon [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","total":330,"completed":9,"skipped":122,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-node] Pods should contain environment variables for services [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 21 23:42:45.402: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:187 [It] should contain environment variables for services [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Mar 21 23:42:45.778: INFO: The status of Pod server-envvars-f38b1502-1949-4f9f-a626-622eca4b024a is Pending, waiting for it to be Running (with Ready = true) Mar 21 23:42:48.102: INFO: The status of Pod server-envvars-f38b1502-1949-4f9f-a626-622eca4b024a is Pending, waiting for it to be Running (with Ready = true) Mar 21 23:42:49.798: INFO: The status of Pod server-envvars-f38b1502-1949-4f9f-a626-622eca4b024a is Running (Ready = true) Mar 21 23:42:49.985: INFO: Waiting up to 5m0s for pod "client-envvars-98c6f382-e9ef-4946-ac34-376fc5989bd0" in namespace "pods-8995" to be "Succeeded or Failed" Mar 21 23:42:50.036: INFO: Pod "client-envvars-98c6f382-e9ef-4946-ac34-376fc5989bd0": Phase="Pending", Reason="", readiness=false. Elapsed: 51.530872ms Mar 21 23:42:52.057: INFO: Pod "client-envvars-98c6f382-e9ef-4946-ac34-376fc5989bd0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.072313878s Mar 21 23:42:54.246: INFO: Pod "client-envvars-98c6f382-e9ef-4946-ac34-376fc5989bd0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.261698613s Mar 21 23:42:56.311: INFO: Pod "client-envvars-98c6f382-e9ef-4946-ac34-376fc5989bd0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.326348912s STEP: Saw pod success Mar 21 23:42:56.311: INFO: Pod "client-envvars-98c6f382-e9ef-4946-ac34-376fc5989bd0" satisfied condition "Succeeded or Failed" Mar 21 23:42:56.420: INFO: Trying to get logs from node latest-worker pod client-envvars-98c6f382-e9ef-4946-ac34-376fc5989bd0 container env3cont: STEP: delete the pod Mar 21 23:42:56.963: INFO: Waiting for pod client-envvars-98c6f382-e9ef-4946-ac34-376fc5989bd0 to disappear Mar 21 23:42:57.137: INFO: Pod client-envvars-98c6f382-e9ef-4946-ac34-376fc5989bd0 no longer exists [AfterEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 21 23:42:57.137: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-8995" for this suite. 
• [SLOW TEST:11.832 seconds] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should contain environment variables for services [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":330,"completed":10,"skipped":136,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 21 23:42:57.235: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating secret with name secret-test-34d646da-da17-46d7-a112-599ff2f62373 STEP: Creating a pod to test consume secrets Mar 21 23:42:57.893: INFO: Waiting up to 5m0s for pod "pod-secrets-ab840055-9f2e-487e-8321-da1eac0819f7" in namespace "secrets-8245" to be "Succeeded or Failed" Mar 21 23:42:58.095: INFO: Pod "pod-secrets-ab840055-9f2e-487e-8321-da1eac0819f7": Phase="Pending", Reason="", readiness=false. Elapsed: 201.531887ms Mar 21 23:43:00.948: INFO: Pod "pod-secrets-ab840055-9f2e-487e-8321-da1eac0819f7": Phase="Pending", Reason="", readiness=false. Elapsed: 3.054827741s Mar 21 23:43:03.080: INFO: Pod "pod-secrets-ab840055-9f2e-487e-8321-da1eac0819f7": Phase="Pending", Reason="", readiness=false. Elapsed: 5.18692309s Mar 21 23:43:05.265: INFO: Pod "pod-secrets-ab840055-9f2e-487e-8321-da1eac0819f7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 7.37139617s STEP: Saw pod success Mar 21 23:43:05.265: INFO: Pod "pod-secrets-ab840055-9f2e-487e-8321-da1eac0819f7" satisfied condition "Succeeded or Failed" Mar 21 23:43:05.271: INFO: Trying to get logs from node latest-worker pod pod-secrets-ab840055-9f2e-487e-8321-da1eac0819f7 container secret-volume-test: STEP: delete the pod Mar 21 23:43:07.233: INFO: Waiting for pod pod-secrets-ab840055-9f2e-487e-8321-da1eac0819f7 to disappear Mar 21 23:43:07.408: INFO: Pod pod-secrets-ab840055-9f2e-487e-8321-da1eac0819f7 no longer exists [AfterEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 21 23:43:07.408: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-8245" for this suite. 
• [SLOW TEST:11.188 seconds] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":330,"completed":11,"skipped":153,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for the cluster [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 21 23:43:08.424: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for the cluster [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2957.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2957.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 21 23:43:20.597: INFO: DNS probes using dns-2957/dns-test-c0cee9b1-9148-46fd-b0bd-92600c144a62 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 21 23:43:20.787: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-2957" for this suite. 
• [SLOW TEST:12.860 seconds] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should provide DNS for the cluster [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for the cluster [Conformance]","total":330,"completed":12,"skipped":209,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 21 23:43:21.285: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin Mar 21 23:43:21.532: INFO: Waiting up to 5m0s for pod "downwardapi-volume-35bc0062-4798-4982-aaac-21e810744aec" in namespace "downward-api-6825" to be "Succeeded or Failed" Mar 21 23:43:21.616: INFO: Pod "downwardapi-volume-35bc0062-4798-4982-aaac-21e810744aec": Phase="Pending", Reason="", readiness=false. Elapsed: 83.763951ms Mar 21 23:43:23.816: INFO: Pod "downwardapi-volume-35bc0062-4798-4982-aaac-21e810744aec": Phase="Pending", Reason="", readiness=false. Elapsed: 2.284301533s Mar 21 23:43:25.899: INFO: Pod "downwardapi-volume-35bc0062-4798-4982-aaac-21e810744aec": Phase="Pending", Reason="", readiness=false. Elapsed: 4.366641855s Mar 21 23:43:28.199: INFO: Pod "downwardapi-volume-35bc0062-4798-4982-aaac-21e810744aec": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.666780895s STEP: Saw pod success Mar 21 23:43:28.199: INFO: Pod "downwardapi-volume-35bc0062-4798-4982-aaac-21e810744aec" satisfied condition "Succeeded or Failed" Mar 21 23:43:28.229: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-35bc0062-4798-4982-aaac-21e810744aec container client-container: STEP: delete the pod Mar 21 23:43:28.423: INFO: Waiting for pod downwardapi-volume-35bc0062-4798-4982-aaac-21e810744aec to disappear Mar 21 23:43:28.498: INFO: Pod downwardapi-volume-35bc0062-4798-4982-aaac-21e810744aec no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 21 23:43:28.498: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6825" for this suite. 
• [SLOW TEST:7.304 seconds] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":330,"completed":13,"skipped":244,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 21 23:43:28.590: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test emptydir 0644 on node default medium Mar 21 23:43:28.887: INFO: Waiting up to 5m0s for pod "pod-1fe25d7f-0de9-4af5-bcac-645b26299811" in namespace "emptydir-1059" to be "Succeeded or Failed" Mar 21 23:43:28.939: INFO: Pod "pod-1fe25d7f-0de9-4af5-bcac-645b26299811": Phase="Pending", Reason="", readiness=false. Elapsed: 52.628295ms Mar 21 23:43:30.965: INFO: Pod "pod-1fe25d7f-0de9-4af5-bcac-645b26299811": Phase="Pending", Reason="", readiness=false. Elapsed: 2.078100454s Mar 21 23:43:33.079: INFO: Pod "pod-1fe25d7f-0de9-4af5-bcac-645b26299811": Phase="Running", Reason="", readiness=true. Elapsed: 4.192077822s Mar 21 23:43:35.137: INFO: Pod "pod-1fe25d7f-0de9-4af5-bcac-645b26299811": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.250432898s STEP: Saw pod success Mar 21 23:43:35.137: INFO: Pod "pod-1fe25d7f-0de9-4af5-bcac-645b26299811" satisfied condition "Succeeded or Failed" Mar 21 23:43:35.250: INFO: Trying to get logs from node latest-worker pod pod-1fe25d7f-0de9-4af5-bcac-645b26299811 container test-container: STEP: delete the pod Mar 21 23:43:35.372: INFO: Waiting for pod pod-1fe25d7f-0de9-4af5-bcac-645b26299811 to disappear Mar 21 23:43:35.380: INFO: Pod pod-1fe25d7f-0de9-4af5-bcac-645b26299811 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 21 23:43:35.381: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1059" for this suite. 
• [SLOW TEST:6.822 seconds] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":330,"completed":14,"skipped":269,"failed":0} SSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 21 23:43:35.412: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name configmap-test-volume-c1cca6de-458d-4e59-9a36-8ec8f45b1a66 STEP: Creating a pod to test consume configMaps Mar 21 23:43:35.621: INFO: Waiting up to 5m0s for pod "pod-configmaps-c497d198-2879-4721-8468-04a409cb7fad" in namespace "configmap-8948" to be "Succeeded or Failed" Mar 21 23:43:35.667: INFO: Pod "pod-configmaps-c497d198-2879-4721-8468-04a409cb7fad": Phase="Pending", Reason="", readiness=false. Elapsed: 45.458899ms Mar 21 23:43:38.081: INFO: Pod "pod-configmaps-c497d198-2879-4721-8468-04a409cb7fad": Phase="Pending", Reason="", readiness=false. Elapsed: 2.459739725s Mar 21 23:43:40.181: INFO: Pod "pod-configmaps-c497d198-2879-4721-8468-04a409cb7fad": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.559976665s STEP: Saw pod success Mar 21 23:43:40.181: INFO: Pod "pod-configmaps-c497d198-2879-4721-8468-04a409cb7fad" satisfied condition "Succeeded or Failed" Mar 21 23:43:40.184: INFO: Trying to get logs from node latest-worker pod pod-configmaps-c497d198-2879-4721-8468-04a409cb7fad container configmap-volume-test: STEP: delete the pod Mar 21 23:43:40.618: INFO: Waiting for pod pod-configmaps-c497d198-2879-4721-8468-04a409cb7fad to disappear Mar 21 23:43:40.821: INFO: Pod pod-configmaps-c497d198-2879-4721-8468-04a409cb7fad no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 21 23:43:40.821: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-8948" for this suite. 
• [SLOW TEST:5.621 seconds] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":330,"completed":15,"skipped":277,"failed":0} S ------------------------------ [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 21 23:43:41.033: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241 [BeforeEach] Kubectl replace /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1548 [It] should update a single-container pod's image [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: running the image k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 Mar 21 23:43:41.262: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=kubectl-5366 run e2e-test-httpd-pod --image=k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 --labels=run=e2e-test-httpd-pod' Mar 21 23:43:41.445: INFO: stderr: "" Mar 21 23:43:41.445: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod is running STEP: verifying the pod e2e-test-httpd-pod was created Mar 21 23:43:46.497: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=kubectl-5366 get pod e2e-test-httpd-pod -o json' Mar 21 23:43:46.628: INFO: stderr: "" Mar 21 23:43:46.628: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2021-03-21T23:43:41Z\",\n \"labels\": {\n \"run\": \"e2e-test-httpd-pod\"\n },\n \"name\": \"e2e-test-httpd-pod\",\n \"namespace\": \"kubectl-5366\",\n \"resourceVersion\": \"6949701\",\n \"uid\": \"70693d2b-d09c-4006-b7f3-7d6a2b80e9fe\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"k8s.gcr.io/e2e-test-images/httpd:2.4.38-1\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-httpd-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-jdxjt\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"latest-worker\",\n \"preemptionPolicy\": \"PreemptLowerPriority\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n 
\"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-jdxjt\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-jdxjt\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-03-21T23:43:41Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-03-21T23:43:44Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-03-21T23:43:44Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-03-21T23:43:41Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"containerd://909e5aee785cfd4dd4b242eb00bff894105ced4cd4397454fb8d07bec870e11a\",\n \"image\": \"k8s.gcr.io/e2e-test-images/httpd:2.4.38-1\",\n \"imageID\": \"k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50\",\n \"lastState\": {},\n \"name\": \"e2e-test-httpd-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"started\": true,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2021-03-21T23:43:44Z\"\n }\n }\n }\n ],\n \"hostIP\": \"172.18.0.9\",\n \"phase\": \"Running\",\n \"podIP\": \"10.244.2.159\",\n \"podIPs\": [\n {\n \"ip\": \"10.244.2.159\"\n }\n ],\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2021-03-21T23:43:41Z\"\n }\n}\n" STEP: replace the image in the pod Mar 21 23:43:46.628: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=kubectl-5366 replace -f -' Mar 21 23:43:46.997: INFO: stderr: "" Mar 21 23:43:46.997: INFO: stdout: "pod/e2e-test-httpd-pod replaced\n" STEP: verifying the pod e2e-test-httpd-pod has the right image k8s.gcr.io/e2e-test-images/busybox:1.29 [AfterEach] Kubectl replace /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1552 Mar 21 23:43:47.122: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=kubectl-5366 delete pods e2e-test-httpd-pod' Mar 21 23:43:55.810: INFO: stderr: "" Mar 21 23:43:55.810: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 21 23:43:55.810: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5366" for this suite. 
• [SLOW TEST:14.877 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl replace /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1545 should update a single-container pod's image [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance]","total":330,"completed":16,"skipped":278,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-network] Services should serve multiport endpoints from pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 21 23:43:55.911: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should serve multiport endpoints from pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating service multi-endpoint-test in namespace services-9918 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-9918 to expose endpoints map[] Mar 21 23:43:56.425: INFO: successfully validated that service multi-endpoint-test in namespace services-9918 exposes endpoints map[] STEP: Creating pod pod1 in namespace services-9918 Mar 21 23:43:56.538: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) Mar 21 23:43:58.551: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) Mar 21 23:44:00.696: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) Mar 21 23:44:02.576: INFO: The status of Pod pod1 is Running (Ready = true) STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-9918 to expose endpoints map[pod1:[100]] Mar 21 23:44:02.720: INFO: successfully validated that service multi-endpoint-test in namespace services-9918 exposes endpoints map[pod1:[100]] STEP: Creating pod pod2 in namespace services-9918 Mar 21 23:44:02.887: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Mar 21 23:44:05.085: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Mar 21 23:44:06.911: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Mar 21 23:44:08.963: INFO: The status of Pod pod2 is Running (Ready = true) STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-9918 to expose endpoints map[pod1:[100] pod2:[101]] Mar 21 23:44:09.259: INFO: successfully validated that service multi-endpoint-test in namespace services-9918 exposes endpoints map[pod1:[100] pod2:[101]] STEP: Deleting pod pod1 in namespace services-9918 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-9918 to expose endpoints map[pod2:[101]] 
Mar 21 23:44:10.965: INFO: successfully validated that service multi-endpoint-test in namespace services-9918 exposes endpoints map[pod2:[101]] STEP: Deleting pod pod2 in namespace services-9918 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-9918 to expose endpoints map[] Mar 21 23:44:11.440: INFO: successfully validated that service multi-endpoint-test in namespace services-9918 exposes endpoints map[] [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 21 23:44:12.274: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-9918" for this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • [SLOW TEST:16.843 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should serve multiport endpoints from pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods [Conformance]","total":330,"completed":17,"skipped":292,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 21 23:44:12.754: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:53 [It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating pod busybox-4f04a6b9-4415-45de-8587-75ac655ba9a4 in namespace container-probe-2505 Mar 21 23:44:19.786: INFO: Started pod busybox-4f04a6b9-4415-45de-8587-75ac655ba9a4 in namespace container-probe-2505 STEP: checking the pod's current state and verifying that restartCount is present Mar 21 23:44:19.825: INFO: Initial restart count of pod busybox-4f04a6b9-4415-45de-8587-75ac655ba9a4 is 0 STEP: deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 21 23:48:21.690: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-2505" for this suite. 
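The probe test above is the negative mirror of the usual liveness demo: the file the probe cats is created once and never removed, so the probe keeps succeeding and restartCount stays at its initial 0 for the whole ~4-minute observation window. A minimal pod in that shape — the timings and sleep duration are illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: busybox-liveness-demo
spec:
  containers:
  - name: busybox
    image: k8s.gcr.io/e2e-test-images/busybox:1.29
    command: ["/bin/sh", "-c", "touch /tmp/health; sleep 600"]   # file persists, so probe never fails
    livenessProbe:
      exec:
        command: ["cat", "/tmp/health"]
      initialDelaySeconds: 5
      periodSeconds: 5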
• [SLOW TEST:249.565 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":330,"completed":18,"skipped":352,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 21 23:48:22.319: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with best effort scope. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a ResourceQuota with best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a best-effort pod STEP: Ensuring resource quota with best effort scope captures the pod usage STEP: Ensuring resource quota with not best effort ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a not best-effort pod STEP: Ensuring resource quota with not best effort scope captures the pod usage STEP: Ensuring resource quota with best effort scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 21 23:48:40.779: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-116" for this suite. • [SLOW TEST:18.554 seconds] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with best effort scope. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. 
[Conformance]","total":330,"completed":19,"skipped":364,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-node] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 21 23:48:40.875: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in env vars [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating secret with name secret-test-91f68320-2be2-4d0c-8002-08017fcd193b STEP: Creating a pod to test consume secrets Mar 21 23:48:41.121: INFO: Waiting up to 5m0s for pod "pod-secrets-65e95642-ba23-4123-975e-04ae9fff805b" in namespace "secrets-8703" to be "Succeeded or Failed" Mar 21 23:48:41.150: INFO: Pod "pod-secrets-65e95642-ba23-4123-975e-04ae9fff805b": Phase="Pending", Reason="", readiness=false. Elapsed: 29.568544ms Mar 21 23:48:43.460: INFO: Pod "pod-secrets-65e95642-ba23-4123-975e-04ae9fff805b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.339304631s Mar 21 23:48:45.472: INFO: Pod "pod-secrets-65e95642-ba23-4123-975e-04ae9fff805b": Phase="Running", Reason="", readiness=true. Elapsed: 4.350975692s Mar 21 23:48:47.538: INFO: Pod "pod-secrets-65e95642-ba23-4123-975e-04ae9fff805b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.41740602s STEP: Saw pod success Mar 21 23:48:47.538: INFO: Pod "pod-secrets-65e95642-ba23-4123-975e-04ae9fff805b" satisfied condition "Succeeded or Failed" Mar 21 23:48:47.602: INFO: Trying to get logs from node latest-worker pod pod-secrets-65e95642-ba23-4123-975e-04ae9fff805b container secret-env-test: STEP: delete the pod Mar 21 23:48:47.850: INFO: Waiting for pod pod-secrets-65e95642-ba23-4123-975e-04ae9fff805b to disappear Mar 21 23:48:47.865: INFO: Pod pod-secrets-65e95642-ba23-4123-975e-04ae9fff805b no longer exists [AfterEach] [sig-node] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 21 23:48:47.865: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-8703" for this suite. 
• [SLOW TEST:7.140 seconds] [sig-node] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should be consumable from pods in env vars [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":330,"completed":20,"skipped":416,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-network] Services should serve a basic endpoint from pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 21 23:48:48.015: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should serve a basic endpoint from pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating service endpoint-test2 in namespace services-1049 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-1049 to expose endpoints map[] Mar 21 23:48:48.432: INFO: successfully validated that service endpoint-test2 in namespace services-1049 exposes endpoints map[] STEP: Creating pod pod1 in namespace services-1049 Mar 21 23:48:48.530: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) Mar 21 23:48:51.020: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) Mar 21 23:48:52.580: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) Mar 21 23:48:54.573: INFO: The status of Pod pod1 is Running (Ready = true) STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-1049 to expose endpoints map[pod1:[80]] Mar 21 23:48:54.695: INFO: successfully validated that service endpoint-test2 in namespace services-1049 exposes endpoints map[pod1:[80]] STEP: Creating pod pod2 in namespace services-1049 Mar 21 23:48:54.760: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Mar 21 23:48:57.156: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Mar 21 23:48:58.781: INFO: The status of Pod pod2 is Running (Ready = true) STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-1049 to expose endpoints map[pod1:[80] pod2:[80]] Mar 21 23:48:58.960: INFO: successfully validated that service endpoint-test2 in namespace services-1049 exposes endpoints map[pod1:[80] pod2:[80]] STEP: Deleting pod pod1 in namespace services-1049 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-1049 to expose endpoints map[pod2:[80]] Mar 21 23:48:59.170: INFO: successfully validated that service endpoint-test2 in namespace services-1049 exposes endpoints map[pod2:[80]] STEP: Deleting pod pod2 in namespace services-1049 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-1049 to expose 
endpoints map[] Mar 21 23:48:59.230: INFO: successfully validated that service endpoint-test2 in namespace services-1049 exposes endpoints map[] [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 21 23:48:59.653: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-1049" for this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • [SLOW TEST:11.951 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should serve a basic endpoint from pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods [Conformance]","total":330,"completed":21,"skipped":429,"failed":0} SSS ------------------------------ [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 21 23:48:59.966: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation and release no longer matching pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Given a Pod with a 'name' label pod-adoption-release is created Mar 21 23:49:00.795: INFO: The status of Pod pod-adoption-release is Pending, waiting for it to be Running (with Ready = true) Mar 21 23:49:02.809: INFO: The status of Pod pod-adoption-release is Pending, waiting for it to be Running (with Ready = true) Mar 21 23:49:04.891: INFO: The status of Pod pod-adoption-release is Pending, waiting for it to be Running (with Ready = true) Mar 21 23:49:06.907: INFO: The status of Pod pod-adoption-release is Running (Ready = true) STEP: When a replicaset with a matching selector is created STEP: Then the orphan pod is adopted STEP: When the matched label of one of its pods change Mar 21 23:49:08.548: INFO: Pod name pod-adoption-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 21 23:49:10.080: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-1569" for this suite. 
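Adoption and release above are label mechanics: a ReplicaSet takes ownership (adds an ownerReference) of any running, controller-less pod that matches its selector, and drops it again the moment the pod's labels stop matching, then creates a replacement. Since the log says the pod carries a 'name' label that the ReplicaSet selects on, the release half by hand would be roughly (the replacement label value is hypothetical):

kubectl label pod pod-adoption-release name=released --overwrite        # no longer matches the RS selector
kubectl get pod pod-adoption-release -o jsonpath='{.metadata.ownerReferences}'   # empty once released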
• [SLOW TEST:10.666 seconds] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation and release no longer matching pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":330,"completed":22,"skipped":432,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 21 23:49:10.633: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap configmap-9133/configmap-test-8d781486-eff4-4e70-99ab-0cf7c69c9f82 STEP: Creating a pod to test consume configMaps Mar 21 23:49:11.704: INFO: Waiting up to 5m0s for pod "pod-configmaps-10d1f73e-ff7a-4831-87d6-38a1c2f10864" in namespace "configmap-9133" to be "Succeeded or Failed" Mar 21 23:49:12.021: INFO: Pod "pod-configmaps-10d1f73e-ff7a-4831-87d6-38a1c2f10864": Phase="Pending", Reason="", readiness=false. Elapsed: 317.410557ms Mar 21 23:49:14.080: INFO: Pod "pod-configmaps-10d1f73e-ff7a-4831-87d6-38a1c2f10864": Phase="Pending", Reason="", readiness=false. Elapsed: 2.375604152s Mar 21 23:49:16.383: INFO: Pod "pod-configmaps-10d1f73e-ff7a-4831-87d6-38a1c2f10864": Phase="Pending", Reason="", readiness=false. Elapsed: 4.678886917s Mar 21 23:49:18.488: INFO: Pod "pod-configmaps-10d1f73e-ff7a-4831-87d6-38a1c2f10864": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.78390391s STEP: Saw pod success Mar 21 23:49:18.488: INFO: Pod "pod-configmaps-10d1f73e-ff7a-4831-87d6-38a1c2f10864" satisfied condition "Succeeded or Failed" Mar 21 23:49:18.555: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-10d1f73e-ff7a-4831-87d6-38a1c2f10864 container env-test: STEP: delete the pod Mar 21 23:49:18.743: INFO: Waiting for pod pod-configmaps-10d1f73e-ff7a-4831-87d6-38a1c2f10864 to disappear Mar 21 23:49:18.807: INFO: Pod pod-configmaps-10d1f73e-ff7a-4831-87d6-38a1c2f10864 no longer exists [AfterEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 21 23:49:18.807: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-9133" for this suite. 
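The environment-consumption path for ConfigMaps parallels the secret one: configMapKeyRef for a single key, or envFrom to import every key at once. A sketch with assumed names:

apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-env-demo
spec:
  restartPolicy: Never
  containers:
  - name: env-test
    image: k8s.gcr.io/e2e-test-images/busybox:1.29
    command: ["sh", "-c", "echo $CONFIG_DATA && env | grep ^CM_"]
    env:
    - name: CONFIG_DATA
      valueFrom:
        configMapKeyRef:
          name: configmap-test
          key: data-1
    envFrom:
    - prefix: CM_                  # imports every key as CM_<key>
      configMapRef:
        name: configmap-test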
• [SLOW TEST:8.252 seconds] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should be consumable via the environment [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":330,"completed":23,"skipped":455,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 21 23:49:18.885: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 21 23:49:20.752: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 21 23:49:23.812: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751967360, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751967360, loc:(*time.Location)(0x99208a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751967361, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751967360, loc:(*time.Location)(0x99208a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-8977db\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 21 23:49:26.150: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751967360, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751967360, loc:(*time.Location)(0x99208a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751967361, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751967360, loc:(*time.Location)(0x99208a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet 
\"sample-webhook-deployment-8977db\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 21 23:49:28.936: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should deny crd creation [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Registering the crd webhook via the AdmissionRegistration API STEP: Creating a custom resource definition that should be denied by the webhook Mar 21 23:49:29.961: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 21 23:49:29.989: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-3574" for this suite. STEP: Destroying namespace "webhook-3574-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 • [SLOW TEST:12.970 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should deny crd creation [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":330,"completed":24,"skipped":471,"failed":0} S ------------------------------ [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 21 23:49:31.855: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241 [BeforeEach] Update Demo /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:293 [It] should scale a replication controller [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a replication controller Mar 21 23:49:32.081: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=kubectl-4814 create -f -' Mar 21 23:49:32.721: INFO: stderr: "" Mar 21 23:49:32.721: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. 
Mar 21 23:49:32.721: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=kubectl-4814 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Mar 21 23:49:33.266: INFO: stderr: "" Mar 21 23:49:33.266: INFO: stdout: "update-demo-nautilus-9qhtl update-demo-nautilus-fl6rc " Mar 21 23:49:33.266: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=kubectl-4814 get pods update-demo-nautilus-9qhtl -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Mar 21 23:49:33.355: INFO: stderr: "" Mar 21 23:49:33.355: INFO: stdout: "" Mar 21 23:49:33.355: INFO: update-demo-nautilus-9qhtl is created but not running Mar 21 23:49:38.356: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=kubectl-4814 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Mar 21 23:49:38.879: INFO: stderr: "" Mar 21 23:49:38.879: INFO: stdout: "update-demo-nautilus-fl6rc update-demo-nautilus-mprpb " Mar 21 23:49:38.879: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=kubectl-4814 get pods update-demo-nautilus-fl6rc -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Mar 21 23:49:39.490: INFO: stderr: "" Mar 21 23:49:39.490: INFO: stdout: "" Mar 21 23:49:39.490: INFO: update-demo-nautilus-fl6rc is created but not running Mar 21 23:49:44.490: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=kubectl-4814 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Mar 21 23:49:45.044: INFO: stderr: "" Mar 21 23:49:45.044: INFO: stdout: "update-demo-nautilus-fl6rc update-demo-nautilus-mprpb " Mar 21 23:49:45.044: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=kubectl-4814 get pods update-demo-nautilus-fl6rc -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Mar 21 23:49:45.818: INFO: stderr: "" Mar 21 23:49:45.818: INFO: stdout: "true" Mar 21 23:49:45.819: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=kubectl-4814 get pods update-demo-nautilus-fl6rc -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Mar 21 23:49:46.156: INFO: stderr: "" Mar 21 23:49:46.156: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.4" Mar 21 23:49:46.156: INFO: validating pod update-demo-nautilus-fl6rc Mar 21 23:49:46.474: INFO: got data: { "image": "nautilus.jpg" } Mar 21 23:49:46.474: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
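------------------------------
The retries above arrive at a fixed five-second cadence (23:49:33, 23:49:38, 23:49:44) until every pod reports running. Reproduced with apimachinery's wait helpers, the loop might look like the sketch below; client, ctx, and the namespace are assumptions, and the five-minute timeout is illustrative:

package sketch

import (
	"context"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForReplicas polls every 5s, matching the log's retry interval, until
// the label selector returns the expected number of pods.
func waitForReplicas(ctx context.Context, client kubernetes.Interface, ns string, want int) error {
	return wait.PollImmediate(5*time.Second, 5*time.Minute, func() (bool, error) {
		pods, err := client.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: "name=update-demo"})
		if err != nil {
			return false, err
		}
		return len(pods.Items) == want, nil
	})
}
------------------------------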
Mar 21 23:49:46.474: INFO: update-demo-nautilus-fl6rc is verified up and running Mar 21 23:49:46.474: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=kubectl-4814 get pods update-demo-nautilus-mprpb -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Mar 21 23:49:48.237: INFO: stderr: "" Mar 21 23:49:48.238: INFO: stdout: "true" Mar 21 23:49:48.238: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=kubectl-4814 get pods update-demo-nautilus-mprpb -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Mar 21 23:49:48.745: INFO: stderr: "" Mar 21 23:49:48.745: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.4" Mar 21 23:49:48.745: INFO: validating pod update-demo-nautilus-mprpb Mar 21 23:49:48.769: INFO: got data: { "image": "nautilus.jpg" } Mar 21 23:49:48.769: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 21 23:49:48.769: INFO: update-demo-nautilus-mprpb is verified up and running STEP: scaling down the replication controller Mar 21 23:49:48.772: INFO: scanned /root for discovery docs: Mar 21 23:49:48.772: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=kubectl-4814 scale rc update-demo-nautilus --replicas=1 --timeout=5m' Mar 21 23:49:50.728: INFO: stderr: "" Mar 21 23:49:50.728: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. 
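------------------------------
The kubectl scale invocation above is a thin wrapper over the ReplicationController scale subresource: read the Scale object, set Spec.Replicas, write it back. A sketch of the same round trip with client-go (the parameter names are assumptions):

package sketch

import (
	"context"

	autoscalingv1 "k8s.io/api/autoscaling/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// scaleRC reads the RC's scale subresource, edits the replica count, and
// writes it back, which is what `kubectl scale rc ... --replicas=N` does.
func scaleRC(ctx context.Context, client kubernetes.Interface, ns, name string, replicas int32) (*autoscalingv1.Scale, error) {
	scale, err := client.CoreV1().ReplicationControllers(ns).GetScale(ctx, name, metav1.GetOptions{})
	if err != nil {
		return nil, err
	}
	scale.Spec.Replicas = replicas
	return client.CoreV1().ReplicationControllers(ns).UpdateScale(ctx, name, scale, metav1.UpdateOptions{})
}
------------------------------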
Mar 21 23:49:50.728: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=kubectl-4814 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Mar 21 23:49:50.908: INFO: stderr: "" Mar 21 23:49:50.908: INFO: stdout: "update-demo-nautilus-fl6rc update-demo-nautilus-mprpb " STEP: Replicas for name=update-demo: expected=1 actual=2 Mar 21 23:49:55.909: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=kubectl-4814 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Mar 21 23:49:58.362: INFO: stderr: "" Mar 21 23:49:58.362: INFO: stdout: "update-demo-nautilus-fl6rc update-demo-nautilus-mprpb " STEP: Replicas for name=update-demo: expected=1 actual=2 Mar 21 23:50:03.362: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=kubectl-4814 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Mar 21 23:50:04.354: INFO: stderr: "" Mar 21 23:50:04.354: INFO: stdout: "update-demo-nautilus-fl6rc update-demo-nautilus-mprpb " STEP: Replicas for name=update-demo: expected=1 actual=2 Mar 21 23:50:09.355: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=kubectl-4814 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Mar 21 23:50:09.486: INFO: stderr: "" Mar 21 23:50:09.486: INFO: stdout: "update-demo-nautilus-fl6rc update-demo-nautilus-mprpb " STEP: Replicas for name=update-demo: expected=1 actual=2 Mar 21 23:50:14.486: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=kubectl-4814 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Mar 21 23:50:14.591: INFO: stderr: "" Mar 21 23:50:14.591: INFO: stdout: "update-demo-nautilus-fl6rc update-demo-nautilus-mprpb " STEP: Replicas for name=update-demo: expected=1 actual=2 Mar 21 23:50:19.591: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=kubectl-4814 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Mar 21 23:50:19.715: INFO: stderr: "" Mar 21 23:50:19.715: INFO: stdout: "update-demo-nautilus-fl6rc update-demo-nautilus-mprpb " STEP: Replicas for name=update-demo: expected=1 actual=2 Mar 21 23:50:24.717: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=kubectl-4814 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Mar 21 23:50:24.891: INFO: stderr: "" Mar 21 23:50:24.891: INFO: stdout: "update-demo-nautilus-fl6rc update-demo-nautilus-mprpb " STEP: Replicas for name=update-demo: expected=1 actual=2 Mar 21 23:50:29.891: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=kubectl-4814 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Mar 21 23:50:30.049: INFO: stderr: "" Mar 21 23:50:30.049: INFO: stdout: "update-demo-nautilus-fl6rc update-demo-nautilus-mprpb " STEP: Replicas for name=update-demo: expected=1 actual=2 Mar 21 23:50:35.050: INFO: Running '/usr/local/bin/kubectl 
--server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=kubectl-4814 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Mar 21 23:50:35.173: INFO: stderr: "" Mar 21 23:50:35.173: INFO: stdout: "update-demo-nautilus-fl6rc update-demo-nautilus-mprpb " STEP: Replicas for name=update-demo: expected=1 actual=2 Mar 21 23:50:40.173: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=kubectl-4814 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Mar 21 23:50:40.300: INFO: stderr: "" Mar 21 23:50:40.300: INFO: stdout: "update-demo-nautilus-fl6rc update-demo-nautilus-mprpb " STEP: Replicas for name=update-demo: expected=1 actual=2 Mar 21 23:50:45.301: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=kubectl-4814 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Mar 21 23:50:45.425: INFO: stderr: "" Mar 21 23:50:45.425: INFO: stdout: "update-demo-nautilus-fl6rc update-demo-nautilus-mprpb " STEP: Replicas for name=update-demo: expected=1 actual=2 Mar 21 23:50:50.425: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=kubectl-4814 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Mar 21 23:50:50.547: INFO: stderr: "" Mar 21 23:50:50.547: INFO: stdout: "update-demo-nautilus-fl6rc update-demo-nautilus-mprpb " STEP: Replicas for name=update-demo: expected=1 actual=2 Mar 21 23:50:55.549: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=kubectl-4814 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Mar 21 23:50:55.672: INFO: stderr: "" Mar 21 23:50:55.672: INFO: stdout: "update-demo-nautilus-fl6rc " Mar 21 23:50:55.672: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=kubectl-4814 get pods update-demo-nautilus-fl6rc -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Mar 21 23:50:55.782: INFO: stderr: "" Mar 21 23:50:55.782: INFO: stdout: "true" Mar 21 23:50:55.782: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=kubectl-4814 get pods update-demo-nautilus-fl6rc -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Mar 21 23:50:55.918: INFO: stderr: "" Mar 21 23:50:55.918: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.4" Mar 21 23:50:55.918: INFO: validating pod update-demo-nautilus-fl6rc Mar 21 23:50:55.971: INFO: got data: { "image": "nautilus.jpg" } Mar 21 23:50:55.971: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
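------------------------------
Verification after the scale-down is two-step: a go-template pulls the update-demo container's image out of .spec.containers, then the test fetches the pod's data endpoint and expects the {"image": "nautilus.jpg"} payload seen above. The image lookup, expressed in Go against the corev1 types (a sketch):

package sketch

import corev1 "k8s.io/api/core/v1"

// containerImage mirrors the go-template
// {{if eq .name "update-demo"}}{{.image}}{{end}}: it returns the image of
// the named container, or "" if the container is absent.
func containerImage(pod *corev1.Pod, name string) string {
	for _, c := range pod.Spec.Containers {
		if c.Name == name {
			return c.Image
		}
	}
	return ""
}
------------------------------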
Mar 21 23:50:55.971: INFO: update-demo-nautilus-fl6rc is verified up and running STEP: scaling up the replication controller Mar 21 23:50:55.974: INFO: scanned /root for discovery docs: Mar 21 23:50:55.974: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=kubectl-4814 scale rc update-demo-nautilus --replicas=2 --timeout=5m' Mar 21 23:50:57.156: INFO: stderr: "" Mar 21 23:50:57.156: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. Mar 21 23:50:57.156: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=kubectl-4814 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Mar 21 23:50:57.329: INFO: stderr: "" Mar 21 23:50:57.329: INFO: stdout: "update-demo-nautilus-72sdf update-demo-nautilus-fl6rc " Mar 21 23:50:57.329: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=kubectl-4814 get pods update-demo-nautilus-72sdf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Mar 21 23:50:57.486: INFO: stderr: "" Mar 21 23:50:57.486: INFO: stdout: "" Mar 21 23:50:57.486: INFO: update-demo-nautilus-72sdf is created but not running Mar 21 23:51:02.487: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=kubectl-4814 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Mar 21 23:51:02.630: INFO: stderr: "" Mar 21 23:51:02.631: INFO: stdout: "update-demo-nautilus-72sdf update-demo-nautilus-fl6rc " Mar 21 23:51:02.631: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=kubectl-4814 get pods update-demo-nautilus-72sdf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Mar 21 23:51:02.761: INFO: stderr: "" Mar 21 23:51:02.761: INFO: stdout: "true" Mar 21 23:51:02.761: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=kubectl-4814 get pods update-demo-nautilus-72sdf -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Mar 21 23:51:02.943: INFO: stderr: "" Mar 21 23:51:02.943: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.4" Mar 21 23:51:02.943: INFO: validating pod update-demo-nautilus-72sdf Mar 21 23:51:03.000: INFO: got data: { "image": "nautilus.jpg" } Mar 21 23:51:03.000: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 21 23:51:03.000: INFO: update-demo-nautilus-72sdf is verified up and running Mar 21 23:51:03.000: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=kubectl-4814 get pods update-demo-nautilus-fl6rc -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}}' Mar 21 23:51:03.116: INFO: stderr: "" Mar 21 23:51:03.116: INFO: stdout: "true" Mar 21 23:51:03.116: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=kubectl-4814 get pods update-demo-nautilus-fl6rc -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Mar 21 23:51:03.257: INFO: stderr: "" Mar 21 23:51:03.257: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.4" Mar 21 23:51:03.257: INFO: validating pod update-demo-nautilus-fl6rc Mar 21 23:51:03.313: INFO: got data: { "image": "nautilus.jpg" } Mar 21 23:51:03.313: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 21 23:51:03.313: INFO: update-demo-nautilus-fl6rc is verified up and running STEP: using delete to clean up resources Mar 21 23:51:03.313: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=kubectl-4814 delete --grace-period=0 --force -f -' Mar 21 23:51:03.469: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Mar 21 23:51:03.469: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Mar 21 23:51:03.469: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=kubectl-4814 get rc,svc -l name=update-demo --no-headers' Mar 21 23:51:03.646: INFO: stderr: "No resources found in kubectl-4814 namespace.\n" Mar 21 23:51:03.646: INFO: stdout: "" Mar 21 23:51:03.647: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=kubectl-4814 get pods -l name=update-demo -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Mar 21 23:51:03.765: INFO: stderr: "" Mar 21 23:51:03.765: INFO: stdout: "update-demo-nautilus-72sdf\nupdate-demo-nautilus-fl6rc\n" Mar 21 23:51:04.266: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=kubectl-4814 get rc,svc -l name=update-demo --no-headers' Mar 21 23:51:04.841: INFO: stderr: "No resources found in kubectl-4814 namespace.\n" Mar 21 23:51:04.841: INFO: stdout: "" Mar 21 23:51:04.841: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=kubectl-4814 get pods -l name=update-demo -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Mar 21 23:51:04.960: INFO: stderr: "" Mar 21 23:51:04.960: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 21 23:51:04.960: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4814" for this suite. 
• [SLOW TEST:93.532 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:291 should scale a replication controller [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","total":330,"completed":25,"skipped":472,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 A set of valid responses are returned for both pod and service ProxyWithPath [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] version v1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 21 23:51:05.387: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] A set of valid responses are returned for both pod and service ProxyWithPath [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Mar 21 23:51:05.618: INFO: Creating pod... Mar 21 23:51:05.797: INFO: Pod Quantity: 1 Status: Pending Mar 21 23:51:06.803: INFO: Pod Quantity: 1 Status: Pending Mar 21 23:51:08.258: INFO: Pod Quantity: 1 Status: Pending Mar 21 23:51:08.852: INFO: Pod Quantity: 1 Status: Pending Mar 21 23:51:09.801: INFO: Pod Status: Running Mar 21 23:51:09.801: INFO: Creating service... 
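------------------------------
The requests that follow all travel through the apiserver's proxy subresource, once per HTTP verb, first against the pod and then against the service. For the pod variant, the typed client can issue the same GET as below (a sketch; the pod name "agnhost" and the path come from the log, the rest are assumptions):

package sketch

import (
	"context"

	"k8s.io/client-go/kubernetes"
)

// proxyGET is equivalent to
// GET /api/v1/namespaces/<ns>/pods/agnhost/proxy/some/path/with/GET
// issued through the typed client's underlying REST client.
func proxyGET(ctx context.Context, client kubernetes.Interface, ns string) ([]byte, error) {
	return client.CoreV1().RESTClient().Get().
		Namespace(ns).
		Resource("pods").
		Name("agnhost").
		SubResource("proxy").
		Suffix("some/path/with/GET").
		DoRaw(ctx)
}
------------------------------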
Mar 21 23:51:09.995: INFO: Starting http.Client for https://172.30.12.66:41865/api/v1/namespaces/proxy-7186/pods/agnhost/proxy/some/path/with/DELETE Mar 21 23:51:10.033: INFO: http.Client request:DELETE | StatusCode:200 | Response:foo | Method:DELETE Mar 21 23:51:10.033: INFO: Starting http.Client for https://172.30.12.66:41865/api/v1/namespaces/proxy-7186/pods/agnhost/proxy/some/path/with/GET Mar 21 23:51:10.145: INFO: http.Client request:GET | StatusCode:200 | Response:foo | Method:GET Mar 21 23:51:10.145: INFO: Starting http.Client for https://172.30.12.66:41865/api/v1/namespaces/proxy-7186/pods/agnhost/proxy/some/path/with/HEAD Mar 21 23:51:10.175: INFO: http.Client request:HEAD | StatusCode:200 Mar 21 23:51:10.175: INFO: Starting http.Client for https://172.30.12.66:41865/api/v1/namespaces/proxy-7186/pods/agnhost/proxy/some/path/with/OPTIONS Mar 21 23:51:10.210: INFO: http.Client request:OPTIONS | StatusCode:200 | Response:foo | Method:OPTIONS Mar 21 23:51:10.210: INFO: Starting http.Client for https://172.30.12.66:41865/api/v1/namespaces/proxy-7186/pods/agnhost/proxy/some/path/with/PATCH Mar 21 23:51:10.270: INFO: http.Client request:PATCH | StatusCode:200 | Response:foo | Method:PATCH Mar 21 23:51:10.270: INFO: Starting http.Client for https://172.30.12.66:41865/api/v1/namespaces/proxy-7186/pods/agnhost/proxy/some/path/with/POST Mar 21 23:51:10.314: INFO: http.Client request:POST | StatusCode:200 | Response:foo | Method:POST Mar 21 23:51:10.314: INFO: Starting http.Client for https://172.30.12.66:41865/api/v1/namespaces/proxy-7186/pods/agnhost/proxy/some/path/with/PUT Mar 21 23:51:10.337: INFO: http.Client request:PUT | StatusCode:200 | Response:foo | Method:PUT Mar 21 23:51:10.337: INFO: Starting http.Client for https://172.30.12.66:41865/api/v1/namespaces/proxy-7186/services/test-service/proxy/some/path/with/DELETE Mar 21 23:51:10.366: INFO: http.Client request:DELETE | StatusCode:200 | Response:foo | Method:DELETE Mar 21 23:51:10.366: INFO: Starting http.Client for https://172.30.12.66:41865/api/v1/namespaces/proxy-7186/services/test-service/proxy/some/path/with/GET Mar 21 23:51:10.460: INFO: http.Client request:GET | StatusCode:200 | Response:foo | Method:GET Mar 21 23:51:10.460: INFO: Starting http.Client for https://172.30.12.66:41865/api/v1/namespaces/proxy-7186/services/test-service/proxy/some/path/with/HEAD Mar 21 23:51:10.535: INFO: http.Client request:HEAD | StatusCode:200 Mar 21 23:51:10.535: INFO: Starting http.Client for https://172.30.12.66:41865/api/v1/namespaces/proxy-7186/services/test-service/proxy/some/path/with/OPTIONS Mar 21 23:51:10.572: INFO: http.Client request:OPTIONS | StatusCode:200 | Response:foo | Method:OPTIONS Mar 21 23:51:10.572: INFO: Starting http.Client for https://172.30.12.66:41865/api/v1/namespaces/proxy-7186/services/test-service/proxy/some/path/with/PATCH Mar 21 23:51:10.599: INFO: http.Client request:PATCH | StatusCode:200 | Response:foo | Method:PATCH Mar 21 23:51:10.599: INFO: Starting http.Client for https://172.30.12.66:41865/api/v1/namespaces/proxy-7186/services/test-service/proxy/some/path/with/POST Mar 21 23:51:10.632: INFO: http.Client request:POST | StatusCode:200 | Response:foo | Method:POST Mar 21 23:51:10.632: INFO: Starting http.Client for https://172.30.12.66:41865/api/v1/namespaces/proxy-7186/services/test-service/proxy/some/path/with/PUT Mar 21 23:51:10.674: INFO: http.Client request:PUT | StatusCode:200 | Response:foo | Method:PUT [AfterEach] version v1 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 21 23:51:10.674: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-7186" for this suite. • [SLOW TEST:5.447 seconds] [sig-network] Proxy /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 version v1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:74 A set of valid responses are returned for both pod and service ProxyWithPath [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] Proxy version v1 A set of valid responses are returned for both pod and service ProxyWithPath [Conformance]","total":330,"completed":26,"skipped":487,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should support rollover [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 21 23:51:10.835: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:86 [It] deployment should support rollover [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Mar 21 23:51:11.050: INFO: Pod name rollover-pod: Found 0 pods out of 1 Mar 21 23:51:16.090: INFO: Pod name rollover-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Mar 21 23:51:16.090: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready Mar 21 23:51:18.438: INFO: Creating deployment "test-rollover-deployment" Mar 21 23:51:18.950: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations Mar 21 23:51:21.253: INFO: Check revision of new replica set for deployment "test-rollover-deployment" Mar 21 23:51:21.346: INFO: Ensure that both replica sets have 1 created replica Mar 21 23:51:21.442: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update Mar 21 23:51:21.477: INFO: Updating deployment test-rollover-deployment Mar 21 23:51:21.477: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller Mar 21 23:51:23.568: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 Mar 21 23:51:23.714: INFO: Make sure deployment "test-rollover-deployment" is complete Mar 21 23:51:23.918: INFO: all replica sets need to contain the pod-template-hash label Mar 21 23:51:23.918: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751967479, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751967479, 
loc:(*time.Location)(0x99208a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751967482, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751967479, loc:(*time.Location)(0x99208a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-6585455996\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 21 23:51:25.975: INFO: all replica sets need to contain the pod-template-hash label Mar 21 23:51:25.975: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751967479, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751967479, loc:(*time.Location)(0x99208a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751967482, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751967479, loc:(*time.Location)(0x99208a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-6585455996\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 21 23:51:28.018: INFO: all replica sets need to contain the pod-template-hash label Mar 21 23:51:28.018: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751967479, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751967479, loc:(*time.Location)(0x99208a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751967486, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751967479, loc:(*time.Location)(0x99208a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-6585455996\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 21 23:51:30.038: INFO: all replica sets need to contain the pod-template-hash label Mar 21 23:51:30.038: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751967479, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751967479, loc:(*time.Location)(0x99208a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751967486, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751967479, loc:(*time.Location)(0x99208a0)}}, Reason:"ReplicaSetUpdated", 
Message:"ReplicaSet \"test-rollover-deployment-6585455996\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 21 23:51:31.983: INFO: all replica sets need to contain the pod-template-hash label Mar 21 23:51:31.983: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751967479, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751967479, loc:(*time.Location)(0x99208a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751967486, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751967479, loc:(*time.Location)(0x99208a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-6585455996\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 21 23:51:33.975: INFO: all replica sets need to contain the pod-template-hash label Mar 21 23:51:33.975: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751967479, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751967479, loc:(*time.Location)(0x99208a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751967486, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751967479, loc:(*time.Location)(0x99208a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-6585455996\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 21 23:51:35.971: INFO: all replica sets need to contain the pod-template-hash label Mar 21 23:51:35.971: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751967479, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751967479, loc:(*time.Location)(0x99208a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751967486, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751967479, loc:(*time.Location)(0x99208a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-6585455996\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 21 23:51:37.942: INFO: Mar 21 23:51:37.942: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:80 Mar 21 23:51:38.376: INFO: Deployment 
"test-rollover-deployment": &Deployment{ObjectMeta:{test-rollover-deployment deployment-645 1ae0e679-b440-4492-ad20-c3a4b103dd13 6961558 2 2021-03-21 23:51:18 +0000 UTC map[name:rollover-pod] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2021-03-21 23:51:21 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:minReadySeconds":{},"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2021-03-21 23:51:36 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.28 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc001ab9848 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2021-03-21 23:51:19 +0000 UTC,LastTransitionTime:2021-03-21 23:51:19 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rollover-deployment-6585455996" has successfully progressed.,LastUpdateTime:2021-03-21 23:51:37 +0000 UTC,LastTransitionTime:2021-03-21 23:51:19 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Mar 21 23:51:38.399: INFO: New ReplicaSet "test-rollover-deployment-6585455996" of Deployment "test-rollover-deployment": 
&ReplicaSet{ObjectMeta:{test-rollover-deployment-6585455996 deployment-645 64ee26bb-0e37-409f-ab43-05e441c74cf6 6961541 2 2021-03-21 23:51:21 +0000 UTC map[name:rollover-pod pod-template-hash:6585455996] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-rollover-deployment 1ae0e679-b440-4492-ad20-c3a4b103dd13 0xc001ab9cc7 0xc001ab9cc8}] [] [{kube-controller-manager Update apps/v1 2021-03-21 23:51:36 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1ae0e679-b440-4492-ad20-c3a4b103dd13\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:minReadySeconds":{},"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 6585455996,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:6585455996] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.28 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc001ab9d58 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Mar 21 23:51:38.399: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": Mar 21 23:51:38.399: INFO: &ReplicaSet{ObjectMeta:{test-rollover-controller deployment-645 3856268c-f1b3-4e13-87ea-76d7f7248d48 6961556 2 2021-03-21 23:51:10 +0000 UTC map[name:rollover-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2] [{apps/v1 Deployment test-rollover-deployment 1ae0e679-b440-4492-ad20-c3a4b103dd13 0xc001ab9ba7 0xc001ab9ba8}] [] [{e2e.test Update apps/v1 2021-03-21 23:51:10 +0000 UTC FieldsV1 
{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2021-03-21 23:51:37 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1ae0e679-b440-4492-ad20-c3a4b103dd13\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod:httpd] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc001ab9c48 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Mar 21 23:51:38.399: INFO: &ReplicaSet{ObjectMeta:{test-rollover-deployment-78bc8b888c deployment-645 b5f6c62b-492c-4dfe-bde8-aea83b523f97 6961066 2 2021-03-21 23:51:19 +0000 UTC map[name:rollover-pod pod-template-hash:78bc8b888c] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-rollover-deployment 1ae0e679-b440-4492-ad20-c3a4b103dd13 0xc001ab9dc7 0xc001ab9dc8}] [] [{kube-controller-manager Update apps/v1 2021-03-21 23:51:21 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1ae0e679-b440-4492-ad20-c3a4b103dd13\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:minReadySeconds":{},"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"redis-slave\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 
78bc8b888c,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:78bc8b888c] map[] [] [] []} {[] [] [{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc001ab9e58 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Mar 21 23:51:38.442: INFO: Pod "test-rollover-deployment-6585455996-mmv57" is available: &Pod{ObjectMeta:{test-rollover-deployment-6585455996-mmv57 test-rollover-deployment-6585455996- deployment-645 2688aecb-80e3-4749-8a27-381ea6a1c179 6961177 0 2021-03-21 23:51:21 +0000 UTC map[name:rollover-pod pod-template-hash:6585455996] map[] [{apps/v1 ReplicaSet test-rollover-deployment-6585455996 64ee26bb-0e37-409f-ab43-05e441c74cf6 0xc00036f777 0xc00036f778}] [] [{kube-controller-manager Update v1 2021-03-21 23:51:21 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"64ee26bb-0e37-409f-ab43-05e441c74cf6\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-03-21 23:51:26 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.209\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qgcwf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qgcwf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:k8s.gcr.io/e2e-test-images/agnhost:2.28,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qgcwf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-21 23:51:21 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-21 23:51:26 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-21 23:51:26 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-21 23:51:21 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.9,PodIP:10.244.2.209,StartTime:2021-03-21 23:51:21 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-03-21 23:51:25 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/agnhost:2.28,ImageID:k8s.gcr.io/e2e-test-images/agnhost@sha256:4f3eec1e602e1e0b0e5858c8f3d399d0ef536799ec1f7666e025f211001cb706,ContainerID:containerd://2969e263b3388990961e0d82a41d4579ed733d3c5759b5bdec0387ab067e5fde,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.209,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 21 23:51:38.442: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-645" for this suite. • [SLOW TEST:27.714 seconds] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support rollover [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":330,"completed":27,"skipped":510,"failed":0} S ------------------------------ [sig-node] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 21 23:51:38.549: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/init_container.go:162 [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating the pod Mar 21 23:51:38.672: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 21 23:51:49.692: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: 
Destroying namespace "init-container-706" for this suite.
• [SLOW TEST:11.197 seconds]
[sig-node] InitContainer [NodeConformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":330,"completed":28,"skipped":511,"failed":0}
SSSS
------------------------------
[sig-node] Probing container should be restarted with an exec liveness probe with timeout [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 21 23:51:49.747: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:53
[It] should be restarted with an exec liveness probe with timeout [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating pod busybox-df7ee161-a048-44d1-98d9-7803448b0e1c in namespace container-probe-1143
Mar 21 23:51:54.313: INFO: Started pod busybox-df7ee161-a048-44d1-98d9-7803448b0e1c in namespace container-probe-1143
STEP: checking the pod's current state and verifying that restartCount is present
Mar 21 23:51:54.339: INFO: Initial restart count of pod busybox-df7ee161-a048-44d1-98d9-7803448b0e1c is 0
Mar 21 23:52:46.134: INFO: Restart count of pod container-probe-1143/busybox-df7ee161-a048-44d1-98d9-7803448b0e1c is now 1 (51.794773241s elapsed)
STEP: deleting the pod
[AfterEach] [sig-node] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 21 23:52:46.284: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-1143" for this suite.
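The restart recorded above is driven by an exec liveness probe whose command runs longer than the probe's timeoutSeconds: every probe attempt times out, is counted as a failure, and eventually makes the kubelet restart the container. A minimal sketch of a pod built the same way follows; the names, image, and commands are illustrative assumptions, not values taken from the suite.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// execProbePod builds a pod whose liveness probe can never succeed in time:
// the probe command sleeps for 10s while TimeoutSeconds is 1s, so each probe
// attempt is recorded as a failure and the container is restarted once
// FailureThreshold is reached.
func execProbePod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "busybox-exec-probe"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "busybox",
				Image:   "busybox",
				Command: []string{"sh", "-c", "sleep 600"}, // keep the container alive
				LivenessProbe: &corev1.Probe{
					// Handler is the embedded field name in the v1.21-era API
					// (renamed ProbeHandler in later releases).
					Handler: corev1.Handler{
						Exec: &corev1.ExecAction{Command: []string{"sh", "-c", "sleep 10"}},
					},
					InitialDelaySeconds: 5,
					TimeoutSeconds:      1,
					FailureThreshold:    1,
				},
			}},
		},
	}
}

func main() {
	fmt.Println(execProbePod().Name)
}

Exec probe timeouts were only enforced once the ExecProbeTimeout feature gate landed (default-on since Kubernetes 1.20); this [NodeConformance] case pins down exactly that behavior.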
• [SLOW TEST:56.592 seconds]
[sig-node] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should be restarted with an exec liveness probe with timeout [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Probing container should be restarted with an exec liveness probe with timeout [NodeConformance] [Conformance]","total":330,"completed":29,"skipped":515,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 21 23:52:46.340: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:53
[It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Mar 21 23:52:46.652: INFO: The status of Pod test-webserver-2866f0e1-e563-4f09-beef-bf9472c392e3 is Pending, waiting for it to be Running (with Ready = true)
Mar 21 23:52:48.734: INFO: The status of Pod test-webserver-2866f0e1-e563-4f09-beef-bf9472c392e3 is Pending, waiting for it to be Running (with Ready = true)
Mar 21 23:52:50.733: INFO: The status of Pod test-webserver-2866f0e1-e563-4f09-beef-bf9472c392e3 is Pending, waiting for it to be Running (with Ready = true)
Mar 21 23:52:52.686: INFO: The status of Pod test-webserver-2866f0e1-e563-4f09-beef-bf9472c392e3 is Running (Ready = false)
Mar 21 23:52:54.704: INFO: The status of Pod test-webserver-2866f0e1-e563-4f09-beef-bf9472c392e3 is Running (Ready = false)
Mar 21 23:52:56.688: INFO: The status of Pod test-webserver-2866f0e1-e563-4f09-beef-bf9472c392e3 is Running (Ready = false)
Mar 21 23:52:58.664: INFO: The status of Pod test-webserver-2866f0e1-e563-4f09-beef-bf9472c392e3 is Running (Ready = false)
Mar 21 23:53:00.668: INFO: The status of Pod test-webserver-2866f0e1-e563-4f09-beef-bf9472c392e3 is Running (Ready = false)
Mar 21 23:53:02.670: INFO: The status of Pod test-webserver-2866f0e1-e563-4f09-beef-bf9472c392e3 is Running (Ready = false)
Mar 21 23:53:04.713: INFO: The status of Pod test-webserver-2866f0e1-e563-4f09-beef-bf9472c392e3 is Running (Ready = false)
Mar 21 23:53:06.684: INFO: The status of Pod test-webserver-2866f0e1-e563-4f09-beef-bf9472c392e3 is Running (Ready = false)
Mar 21 23:53:08.685: INFO: The status of Pod test-webserver-2866f0e1-e563-4f09-beef-bf9472c392e3 is Running (Ready = false)
Mar 21 23:53:10.739: INFO: The status of Pod test-webserver-2866f0e1-e563-4f09-beef-bf9472c392e3 is Running (Ready = false)
Mar 21 23:53:12.679: INFO: The status of Pod test-webserver-2866f0e1-e563-4f09-beef-bf9472c392e3 is Running (Ready = false)
Mar 21 23:53:14.680: INFO: The status of Pod test-webserver-2866f0e1-e563-4f09-beef-bf9472c392e3 is Running (Ready = true)
Mar 21 23:53:14.747: INFO: Container started at 2021-03-21 23:52:50 +0000 UTC, pod became ready at 2021-03-21 23:53:13 +0000 UTC
[AfterEach] [sig-node] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 21 23:53:14.747: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-1141" for this suite.
• [SLOW TEST:28.452 seconds]
[sig-node] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":330,"completed":30,"skipped":536,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 21 23:53:14.792: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Mar 21 23:53:16.362: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Mar 21 23:53:19.134: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751967596, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751967596, loc:(*time.Location)(0x99208a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751967596, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751967596, loc:(*time.Location)(0x99208a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-8977db\" is progressing."}}, CollisionCount:(*int32)(nil)}
Mar 21 23:53:21.318: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751967596, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751967596, loc:(*time.Location)(0x99208a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751967596, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751967596, loc:(*time.Location)(0x99208a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-8977db\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Mar 21 23:53:24.314: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] listing mutating webhooks should work [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Listing all of the created mutating webhooks
STEP: Creating a configMap that should be mutated
STEP: Deleting the collection of mutating webhooks
STEP: Creating a configMap that should not be mutated
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 21 23:53:25.368: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-2871" for this suite.
STEP: Destroying namespace "webhook-2871-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101
• [SLOW TEST:11.122 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  listing mutating webhooks should work [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":330,"completed":31,"skipped":568,"failed":0}
[sig-node] Pods should get a host IP [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 21 23:53:25.914: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:187
[It] should get a host IP [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: creating pod
Mar 21 23:53:26.297: INFO: The status of Pod pod-hostip-d301c4b8-bc1f-4598-99cd-4e7f8e1a3e21 is Pending, waiting for it to be Running (with Ready = true)
Mar 21 23:53:28.494: INFO: The status of Pod pod-hostip-d301c4b8-bc1f-4598-99cd-4e7f8e1a3e21 is Pending, waiting for it to be Running (with Ready = true)
Mar 21 23:53:30.326: INFO: The status of Pod pod-hostip-d301c4b8-bc1f-4598-99cd-4e7f8e1a3e21 is Running (Ready = true)
Mar 21 23:53:30.403: INFO: Pod pod-hostip-d301c4b8-bc1f-4598-99cd-4e7f8e1a3e21 has hostIP: 172.18.0.9
[AfterEach] [sig-node] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 21 23:53:30.403: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-5987" for this suite.
•{"msg":"PASSED [sig-node] Pods should get a host IP [NodeConformance] [Conformance]","total":330,"completed":32,"skipped":568,"failed":0}
SSSSSS
------------------------------
[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-apps] CronJob
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 21 23:53:30.440: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename cronjob
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] CronJob
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:63
[It] should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a ForbidConcurrent cronjob
Mar 21 23:53:30.683: FAIL: Failed to create CronJob in namespace cronjob-3615
Unexpected error:
    <*errors.StatusError | 0xc001f06820>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {
                SelfLink: "",
                ResourceVersion: "",
                Continue: "",
                RemainingItemCount: nil,
            },
            Status: "Failure",
            Message: "the server could not find the requested resource",
            Reason: "NotFound",
            Details: {Name: "", Group: "", Kind: "", UID: "", Causes: nil, RetryAfterSeconds: 0},
            Code: 404,
        },
    }
    the server could not find the requested resource
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/apps.glob..func1.4()
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:132 +0x1f1
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc002c6a180)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c
k8s.io/kubernetes/test/e2e.TestE2E(0xc002c6a180)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b
testing.tRunner(0xc002c6a180, 0x6d60740)
	/usr/local/go/src/testing/testing.go:1194 +0xef
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1239 +0x2b3
[AfterEach] [sig-apps] CronJob
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
STEP: Collecting events from namespace "cronjob-3615".
STEP: Found 0 events.
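The FAIL above is a 404 on the create call itself: the API server reports NotFound for the CronJob resource, not for a named object. That usually means the server does not serve the group/version the client is asking for — for example, a test client requesting batch/v1 CronJobs from an API server that only exposes batch/v1beta1, a classic client/server version-skew symptom. (The cronjob under test would have set concurrencyPolicy: Forbid, which tells the controller not to start a new run while a previous run is still active.) One way to confirm which batch versions a cluster serves is the discovery API; a sketch, assuming the kubeconfig path the suite logs:

package main

import (
	"fmt"

	"k8s.io/client-go/discovery"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the kubeconfig the suite uses (the path is an assumption).
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	dc, err := discovery.NewDiscoveryClientForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Ask the server which group/versions it actually serves. If batch/v1
	// is missing while batch/v1beta1 is present, creating a batch/v1
	// CronJob returns exactly the 404 seen above.
	groups, err := dc.ServerGroups()
	if err != nil {
		panic(err)
	}
	for _, g := range groups.Groups {
		if g.Name == "batch" {
			for _, v := range g.Versions {
				fmt.Println(v.GroupVersion)
			}
		}
	}
}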
Mar 21 23:53:30.722: INFO: POD NODE PHASE GRACE CONDITIONS Mar 21 23:53:30.722: INFO: Mar 21 23:53:30.825: INFO: Logging node info for node latest-control-plane Mar 21 23:53:30.871: INFO: Node Info: &Node{ObjectMeta:{latest-control-plane 490b9532-4cb6-4803-8805-500c50bef538 6958570 0 2021-02-19 10:11:38 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-control-plane kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master:] map[kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2021-02-19 10:11:38 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {kubeadm Update v1 2021-02-19 10:11:39 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{}}}}} {kube-controller-manager Update v1 2021-02-19 10:12:16 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:kind://docker/latest/latest-control-plane,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki 
BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-03-21 23:49:31 +0000 UTC,LastTransitionTime:2021-02-19 10:11:38 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-03-21 23:49:31 +0000 UTC,LastTransitionTime:2021-02-19 10:11:38 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-03-21 23:49:31 +0000 UTC,LastTransitionTime:2021-02-19 10:11:38 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-03-21 23:49:31 +0000 UTC,LastTransitionTime:2021-02-19 10:12:15 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.14,},NodeAddress{Type:Hostname,Address:latest-control-plane,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:2277e49732264d9b915753a27b5b08cc,SystemUUID:3fcd47a6-9190-448f-a26a-9823c0424f23,BootID:b267d78b-f69b-4338-80e8-3f4944338e5d,KernelVersion:4.15.0-118-generic,OSImage:Ubuntu 20.04 LTS,ContainerRuntimeVersion:containerd://1.3.3-14-g449e9269,KubeletVersion:v1.21.0-alpha.0,KubeProxyVersion:v1.21.0-alpha.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/etcd:3.4.13-0],SizeBytes:254659261,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.21.0-alpha.0],SizeBytes:136866158,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.21.0-alpha.0],SizeBytes:95511852,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.21.0-alpha.0],SizeBytes:88147262,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.21.0-alpha.0],SizeBytes:66088748,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:a3f7549ea04c419276e8b84e90a515bbce5bc8a057be2ed974ec45492eca346e k8s.gcr.io/e2e-test-images/agnhost:2.26],SizeBytes:49216572,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:4f3eec1e602e1e0b0e5858c8f3d399d0ef536799ec1f7666e025f211001cb706 k8s.gcr.io/e2e-test-images/agnhost:2.28],SizeBytes:49210832,},ContainerImage{Names:[k8s.gcr.io/coredns:1.7.0],SizeBytes:45355487,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 docker.io/aquasec/kube-bench:0.3.1],SizeBytes:8042926,},ContainerImage{Names:[k8s.gcr.io/pause:3.2],SizeBytes:685724,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:301268,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 21 23:53:30.872: INFO: Logging kubelet events for node latest-control-plane Mar 21 23:53:30.918: INFO: Logging pods the kubelet thinks is on node latest-control-plane Mar 21 23:53:30.985: INFO: etcd-latest-control-plane started at 2021-02-19 10:11:46 +0000 UTC (0+1 container statuses 
recorded)
Mar 21 23:53:30.986: INFO: Container etcd ready: true, restart count 0
Mar 21 23:53:30.986: INFO: kube-proxy-6jdsd started at 2021-02-19 10:11:54 +0000 UTC (0+1 container statuses recorded)
Mar 21 23:53:30.986: INFO: Container kube-proxy ready: true, restart count 0
Mar 21 23:53:30.986: INFO: local-path-provisioner-8b46957d4-54gls started at 2021-02-19 10:12:21 +0000 UTC (0+1 container statuses recorded)
Mar 21 23:53:30.986: INFO: Container local-path-provisioner ready: true, restart count 0
Mar 21 23:53:30.986: INFO: coredns-74ff55c5b-7rm8b started at 2021-03-21 23:49:35 +0000 UTC (0+1 container statuses recorded)
Mar 21 23:53:30.986: INFO: Container coredns ready: true, restart count 0
Mar 21 23:53:30.986: INFO: kube-controller-manager-latest-control-plane started at 2021-02-19 10:11:46 +0000 UTC (0+1 container statuses recorded)
Mar 21 23:53:30.986: INFO: Container kube-controller-manager ready: true, restart count 0
Mar 21 23:53:30.986: INFO: kube-scheduler-latest-control-plane started at 2021-02-19 10:11:46 +0000 UTC (0+1 container statuses recorded)
Mar 21 23:53:30.986: INFO: Container kube-scheduler ready: true, restart count 0
Mar 21 23:53:30.986: INFO: kube-apiserver-latest-control-plane started at 2021-02-19 10:11:46 +0000 UTC (0+1 container statuses recorded)
Mar 21 23:53:30.986: INFO: Container kube-apiserver ready: true, restart count 0
Mar 21 23:53:30.986: INFO: kindnet-94zqp started at 2021-02-19 10:11:54 +0000 UTC (0+1 container statuses recorded)
Mar 21 23:53:30.986: INFO: Container kindnet-cni ready: true, restart count 0
Mar 21 23:53:30.986: INFO: coredns-74ff55c5b-xcknl started at 2021-03-21 23:31:55 +0000 UTC (0+1 container statuses recorded)
Mar 21 23:53:30.986: INFO: Container coredns ready: true, restart count 0
W0321 23:53:30.994962 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
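The per-node dump above reports, for each pod the kubelet claims, every container's readiness and restart count. Equivalent information can be pulled from the API server with a field selector on spec.nodeName; a client-go sketch of that query follows (the suite itself may ask the kubelet directly, so this is an approximation, and the kubeconfig path and node name are taken from the log as examples):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// List pods scheduled to one node, across all namespaces.
	pods, err := cs.CoreV1().Pods(metav1.NamespaceAll).List(context.TODO(),
		metav1.ListOptions{FieldSelector: "spec.nodeName=latest-control-plane"})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		for _, st := range p.Status.ContainerStatuses {
			fmt.Printf("%s/%s container %s ready=%v restarts=%d\n",
				p.Namespace, p.Name, st.Name, st.Ready, st.RestartCount)
		}
	}
}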
Mar 21 23:53:31.190: INFO: Latency metrics for node latest-control-plane Mar 21 23:53:31.190: INFO: Logging node info for node latest-worker Mar 21 23:53:31.196: INFO: Node Info: &Node{ObjectMeta:{latest-worker 52cd6d4b-d53f-435d-801a-04c2822dec44 6960140 0 2021-02-19 10:12:05 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux io.kubernetes.storage.mock/node:some-mock-node kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-worker kubernetes.io/os:linux] map[csi.volume.kubernetes.io/nodeid:{"csi-mock-csi-mock-volumes-11":"csi-mock-csi-mock-volumes-11","csi-mock-csi-mock-volumes-1170":"csi-mock-csi-mock-volumes-1170","csi-mock-csi-mock-volumes-1204":"csi-mock-csi-mock-volumes-1204","csi-mock-csi-mock-volumes-1243":"csi-mock-csi-mock-volumes-1243","csi-mock-csi-mock-volumes-135":"csi-mock-csi-mock-volumes-135","csi-mock-csi-mock-volumes-1517":"csi-mock-csi-mock-volumes-1517","csi-mock-csi-mock-volumes-1561":"csi-mock-csi-mock-volumes-1561","csi-mock-csi-mock-volumes-1595":"csi-mock-csi-mock-volumes-1595","csi-mock-csi-mock-volumes-1714":"csi-mock-csi-mock-volumes-1714","csi-mock-csi-mock-volumes-1732":"csi-mock-csi-mock-volumes-1732","csi-mock-csi-mock-volumes-1876":"csi-mock-csi-mock-volumes-1876","csi-mock-csi-mock-volumes-1945":"csi-mock-csi-mock-volumes-1945","csi-mock-csi-mock-volumes-2041":"csi-mock-csi-mock-volumes-2041","csi-mock-csi-mock-volumes-2057":"csi-mock-csi-mock-volumes-2057","csi-mock-csi-mock-volumes-2208":"csi-mock-csi-mock-volumes-2208","csi-mock-csi-mock-volumes-2302":"csi-mock-csi-mock-volumes-2302","csi-mock-csi-mock-volumes-2459":"csi-mock-csi-mock-volumes-2459","csi-mock-csi-mock-volumes-2511":"csi-mock-csi-mock-volumes-2511","csi-mock-csi-mock-volumes-2706":"csi-mock-csi-mock-volumes-2706","csi-mock-csi-mock-volumes-2710":"csi-mock-csi-mock-volumes-2710","csi-mock-csi-mock-volumes-273":"csi-mock-csi-mock-volumes-273","csi-mock-csi-mock-volumes-2782":"csi-mock-csi-mock-volumes-2782","csi-mock-csi-mock-volumes-2797":"csi-mock-csi-mock-volumes-2797","csi-mock-csi-mock-volumes-2805":"csi-mock-csi-mock-volumes-2805","csi-mock-csi-mock-volumes-2843":"csi-mock-csi-mock-volumes-2843","csi-mock-csi-mock-volumes-3075":"csi-mock-csi-mock-volumes-3075","csi-mock-csi-mock-volumes-3107":"csi-mock-csi-mock-volumes-3107","csi-mock-csi-mock-volumes-3115":"csi-mock-csi-mock-volumes-3115","csi-mock-csi-mock-volumes-3164":"csi-mock-csi-mock-volumes-3164","csi-mock-csi-mock-volumes-3202":"csi-mock-csi-mock-volumes-3202","csi-mock-csi-mock-volumes-3235":"csi-mock-csi-mock-volumes-3235","csi-mock-csi-mock-volumes-3298":"csi-mock-csi-mock-volumes-3298","csi-mock-csi-mock-volumes-3313":"csi-mock-csi-mock-volumes-3313","csi-mock-csi-mock-volumes-3364":"csi-mock-csi-mock-volumes-3364","csi-mock-csi-mock-volumes-3488":"csi-mock-csi-mock-volumes-3488","csi-mock-csi-mock-volumes-3490":"csi-mock-csi-mock-volumes-3490","csi-mock-csi-mock-volumes-3546":"csi-mock-csi-mock-volumes-3546","csi-mock-csi-mock-volumes-3568":"csi-mock-csi-mock-volumes-3568","csi-mock-csi-mock-volumes-3595":"csi-mock-csi-mock-volumes-3595","csi-mock-csi-mock-volumes-3615":"csi-mock-csi-mock-volumes-3615","csi-mock-csi-mock-volumes-3622":"csi-mock-csi-mock-volumes-3622","csi-mock-csi-mock-volumes-3660":"csi-mock-csi-mock-volumes-3660","csi-mock-csi-mock-volumes-3738":"csi-mock-csi-mock-volumes-3738","csi-mock-csi-mock-volumes-380":"csi-mock-csi-mock-volumes-380","csi-mock-csi-mock-volumes-3905":"csi-mock-csi-mock-volumes-3905","csi-mock-csi-mock-volumes-3983":"csi-mock-csi-mock-volumes-3983","csi-mock
-csi-mock-volumes-4658":"csi-mock-csi-mock-volumes-4658","csi-mock-csi-mock-volumes-4689":"csi-mock-csi-mock-volumes-4689","csi-mock-csi-mock-volumes-4839":"csi-mock-csi-mock-volumes-4839","csi-mock-csi-mock-volumes-4871":"csi-mock-csi-mock-volumes-4871","csi-mock-csi-mock-volumes-4885":"csi-mock-csi-mock-volumes-4885","csi-mock-csi-mock-volumes-4888":"csi-mock-csi-mock-volumes-4888","csi-mock-csi-mock-volumes-5028":"csi-mock-csi-mock-volumes-5028","csi-mock-csi-mock-volumes-5118":"csi-mock-csi-mock-volumes-5118","csi-mock-csi-mock-volumes-5120":"csi-mock-csi-mock-volumes-5120","csi-mock-csi-mock-volumes-5160":"csi-mock-csi-mock-volumes-5160","csi-mock-csi-mock-volumes-5164":"csi-mock-csi-mock-volumes-5164","csi-mock-csi-mock-volumes-5225":"csi-mock-csi-mock-volumes-5225","csi-mock-csi-mock-volumes-526":"csi-mock-csi-mock-volumes-526","csi-mock-csi-mock-volumes-5365":"csi-mock-csi-mock-volumes-5365","csi-mock-csi-mock-volumes-5399":"csi-mock-csi-mock-volumes-5399","csi-mock-csi-mock-volumes-5443":"csi-mock-csi-mock-volumes-5443","csi-mock-csi-mock-volumes-5520":"csi-mock-csi-mock-volumes-5520","csi-mock-csi-mock-volumes-5561":"csi-mock-csi-mock-volumes-5561","csi-mock-csi-mock-volumes-5608":"csi-mock-csi-mock-volumes-5608","csi-mock-csi-mock-volumes-5652":"csi-mock-csi-mock-volumes-5652","csi-mock-csi-mock-volumes-5672":"csi-mock-csi-mock-volumes-5672","csi-mock-csi-mock-volumes-569":"csi-mock-csi-mock-volumes-569","csi-mock-csi-mock-volumes-5759":"csi-mock-csi-mock-volumes-5759","csi-mock-csi-mock-volumes-5910":"csi-mock-csi-mock-volumes-5910","csi-mock-csi-mock-volumes-6046":"csi-mock-csi-mock-volumes-6046","csi-mock-csi-mock-volumes-6099":"csi-mock-csi-mock-volumes-6099","csi-mock-csi-mock-volumes-621":"csi-mock-csi-mock-volumes-621","csi-mock-csi-mock-volumes-6347":"csi-mock-csi-mock-volumes-6347","csi-mock-csi-mock-volumes-6447":"csi-mock-csi-mock-volumes-6447","csi-mock-csi-mock-volumes-6752":"csi-mock-csi-mock-volumes-6752","csi-mock-csi-mock-volumes-6763":"csi-mock-csi-mock-volumes-6763","csi-mock-csi-mock-volumes-7184":"csi-mock-csi-mock-volumes-7184","csi-mock-csi-mock-volumes-7244":"csi-mock-csi-mock-volumes-7244","csi-mock-csi-mock-volumes-7259":"csi-mock-csi-mock-volumes-7259","csi-mock-csi-mock-volumes-726":"csi-mock-csi-mock-volumes-726","csi-mock-csi-mock-volumes-7302":"csi-mock-csi-mock-volumes-7302","csi-mock-csi-mock-volumes-7346":"csi-mock-csi-mock-volumes-7346","csi-mock-csi-mock-volumes-7378":"csi-mock-csi-mock-volumes-7378","csi-mock-csi-mock-volumes-7385":"csi-mock-csi-mock-volumes-7385","csi-mock-csi-mock-volumes-746":"csi-mock-csi-mock-volumes-746","csi-mock-csi-mock-volumes-7574":"csi-mock-csi-mock-volumes-7574","csi-mock-csi-mock-volumes-7712":"csi-mock-csi-mock-volumes-7712","csi-mock-csi-mock-volumes-7749":"csi-mock-csi-mock-volumes-7749","csi-mock-csi-mock-volumes-7820":"csi-mock-csi-mock-volumes-7820","csi-mock-csi-mock-volumes-7946":"csi-mock-csi-mock-volumes-7946","csi-mock-csi-mock-volumes-8159":"csi-mock-csi-mock-volumes-8159","csi-mock-csi-mock-volumes-8382":"csi-mock-csi-mock-volumes-8382","csi-mock-csi-mock-volumes-8458":"csi-mock-csi-mock-volumes-8458","csi-mock-csi-mock-volumes-8569":"csi-mock-csi-mock-volumes-8569","csi-mock-csi-mock-volumes-8626":"csi-mock-csi-mock-volumes-8626","csi-mock-csi-mock-volumes-8627":"csi-mock-csi-mock-volumes-8627","csi-mock-csi-mock-volumes-8774":"csi-mock-csi-mock-volumes-8774","csi-mock-csi-mock-volumes-8777":"csi-mock-csi-mock-volumes-8777","csi-mock-csi-mock-volumes-8880":"csi-mock-csi-mock-volumes-8880","csi-mock-
csi-mock-volumes-923":"csi-mock-csi-mock-volumes-923","csi-mock-csi-mock-volumes-9279":"csi-mock-csi-mock-volumes-9279","csi-mock-csi-mock-volumes-9372":"csi-mock-csi-mock-volumes-9372","csi-mock-csi-mock-volumes-9687":"csi-mock-csi-mock-volumes-9687","csi-mock-csi-mock-volumes-969":"csi-mock-csi-mock-volumes-969","csi-mock-csi-mock-volumes-983":"csi-mock-csi-mock-volumes-983","csi-mock-csi-mock-volumes-9858":"csi-mock-csi-mock-volumes-9858","csi-mock-csi-mock-volumes-9983":"csi-mock-csi-mock-volumes-9983","csi-mock-csi-mock-volumes-9992":"csi-mock-csi-mock-volumes-9992","csi-mock-csi-mock-volumes-9995":"csi-mock-csi-mock-volumes-9995"} kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-02-19 10:12:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {kube-controller-manager Update v1 2021-03-21 23:45:22 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}}}}} {kubelet Update v1 2021-03-21 23:45:51 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:csi.volume.kubernetes.io/nodeid":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:io.kubernetes.storage.mock/node":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:kind://docker/latest/latest-worker,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 
DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-03-21 23:50:51 +0000 UTC,LastTransitionTime:2021-02-19 10:12:05 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-03-21 23:50:51 +0000 UTC,LastTransitionTime:2021-02-19 10:12:05 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-03-21 23:50:51 +0000 UTC,LastTransitionTime:2021-02-19 10:12:05 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-03-21 23:50:51 +0000 UTC,LastTransitionTime:2021-02-19 10:12:25 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.9,},NodeAddress{Type:Hostname,Address:latest-worker,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:9501a3ffa7bf40adaca8f4cb3d1dcc93,SystemUUID:6921bb21-67c9-42ea-b514-81b1daac5968,BootID:b267d78b-f69b-4338-80e8-3f4944338e5d,KernelVersion:4.15.0-118-generic,OSImage:Ubuntu 20.04 LTS,ContainerRuntimeVersion:containerd://1.3.3-14-g449e9269,KubeletVersion:v1.21.0-alpha.0,KubeProxyVersion:v1.21.0-alpha.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/ollivier/clearwater-cassandra@sha256:7460a5e29c91e16bd36534b5284b27832e1c6e0f7c22a4bb79eda79942d250e1 docker.io/ollivier/clearwater-cassandra:hunter],SizeBytes:386500834,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead-prov@sha256:ef3803b4336ea217f9a467dac7f5c4bc3d2cd0f3cbbcc6c101419506fc3a6fa4 docker.io/ollivier/clearwater-homestead-prov:hunter],SizeBytes:360721934,},ContainerImage{Names:[docker.io/ollivier/clearwater-ellis@sha256:abcaf4a9ec74aa746f3b3949f05ee359349fce2b0b3b2deedf67164719cff5dc docker.io/ollivier/clearwater-ellis:hunter],SizeBytes:351519591,},ContainerImage{Names:[docker.io/ollivier/clearwater-homer@sha256:f8738d756137990120f6aafca7e0bc57186ab591a4b51508ba9ddf30688f4de1 docker.io/ollivier/clearwater-homer:hunter],SizeBytes:344304298,},ContainerImage{Names:[docker.io/ollivier/clearwater-astaire@sha256:5bc730435563dc3e31f3a3b7bb58b5899c453929b624052622b3a9a1c2fd04d8 docker.io/ollivier/clearwater-astaire:hunter],SizeBytes:327310970,},ContainerImage{Names:[docker.io/ollivier/clearwater-bono@sha256:0da41ddbf72c158f8e617015450587634cdf6856696005601b2108df92a27254 docker.io/ollivier/clearwater-bono:hunter],SizeBytes:303708624,},ContainerImage{Names:[docker.io/ollivier/clearwater-sprout@sha256:fd91f90bdfbfac30288ca296962e36537f8fa311d34711904e027098227d9f49 docker.io/ollivier/clearwater-sprout:hunter],SizeBytes:298627136,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead@sha256:8467d6c9caefa6a402eba927ee31f8179b2dc3386ac4768dd04f4bd630a1b9e9 docker.io/ollivier/clearwater-homestead:hunter],SizeBytes:295167572,},ContainerImage{Names:[docker.io/ollivier/clearwater-ralf@sha256:7054a741dbd4ea45fa4efd8138c07e21d35fbed627971559de5df2d03883e94f 
docker.io/ollivier/clearwater-ralf:hunter],SizeBytes:287441316,},ContainerImage{Names:[docker.io/ollivier/clearwater-chronos@sha256:1d5f9b5385db632e3076463fec5a4db5d4a15ef21ac5a112cb42fb0551ffa36d docker.io/ollivier/clearwater-chronos:hunter],SizeBytes:285504787,},ContainerImage{Names:[k8s.gcr.io/etcd:3.4.13-0],SizeBytes:254659261,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.21.0-alpha.0],SizeBytes:136866158,},ContainerImage{Names:[docker.io/aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 docker.io/aquasec/kube-hunter:0.3.1],SizeBytes:124684106,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:112029652,},ContainerImage{Names:[docker.io/gluster/glusterdynamic-provisioner@sha256:90067cb05a7d217651e84576935934fd9dcff8e6bcdcbaa416bbf36fcd09dbd1 docker.io/gluster/glusterdynamic-provisioner:v1.0],SizeBytes:111200078,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner@sha256:8bc20b52ce066dd4ea3d9eaac40c04ea8a77f47c33789676580cf4c7c9ea3c3d k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner:v1.0],SizeBytes:111199402,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.21.0-alpha.0],SizeBytes:95511852,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.21.0-alpha.0],SizeBytes:88147262,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:85425365,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.21.0-alpha.0],SizeBytes:66088748,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:49230179,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:a3f7549ea04c419276e8b84e90a515bbce5bc8a057be2ed974ec45492eca346e k8s.gcr.io/e2e-test-images/agnhost:2.26],SizeBytes:49216572,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:4f3eec1e602e1e0b0e5858c8f3d399d0ef536799ec1f7666e025f211001cb706 k8s.gcr.io/e2e-test-images/agnhost:2.28],SizeBytes:49210832,},ContainerImage{Names:[docker.io/bitnami/kubectl@sha256:471a2f06834db208d0e4bef5856550f911357c41b72d0baefa591b3714839067 docker.io/bitnami/kubectl:latest],SizeBytes:48898314,},ContainerImage{Names:[k8s.gcr.io/coredns:1.7.0],SizeBytes:45355487,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:41902332,},ContainerImage{Names:[docker.io/library/httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a docker.io/library/httpd:2.4.39-alpine],SizeBytes:41901429,},ContainerImage{Names:[docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 docker.io/library/httpd:2.4.38-alpine],SizeBytes:40765017,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 
k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:40765006,},ContainerImage{Names:[docker.io/ollivier/clearwater-live-test@sha256:1ec1c46f531b5466e62e2a3f1e65e97336e8ea0ca50f11a4305775d6aeac9a58 docker.io/ollivier/clearwater-live-test:hunter],SizeBytes:39060692,},ContainerImage{Names:[k8s.gcr.io/build-image/debian-iptables@sha256:abe8cef9e116f2d5ec1175c386e33841ff3386779138b425af876384b0fd7ccb k8s.gcr.io/build-image/debian-iptables:buster-v1.5.0],SizeBytes:38088315,},ContainerImage{Names:[k8s.gcr.io/build-image/debian-iptables@sha256:4c9410a4ee555dcb0e8b7bd6fc77c65ac400f7c5bd4555df8187630efaea6ea4 k8s.gcr.io/build-image/debian-iptables:buster-v1.3.0],SizeBytes:37934917,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:ff02aacd9766d597883fabafc7ad604c719a57611db1bcc1564c69a45b000a55 gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.17],SizeBytes:25311280,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:24757245,},ContainerImage{Names:[docker.io/pingcap/chaos-daemon@sha256:2138de599ca94104101fcd8f3bd04401f8c88f2bc721d0f77d55a81b85f4d92f docker.io/pingcap/chaos-daemon:v0.8.0],SizeBytes:21885344,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:20c828075d1e36f679d6a91e905b0927141eef5e15be0c9a1ca4a6a0ed9313d2 k8s.gcr.io/sig-storage/csi-provisioner:v2.1.0],SizeBytes:21205045,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:78e3393f5fd5ff6c1e5dada2478cfa456fb7164929e573cf9a87bf6532730679 k8s.gcr.io/sig-storage/csi-provisioner:v1.6.0],SizeBytes:19408504,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:2ffa647e8107cfd39e5f464e738dce014c9f5e51b108da36c3ab621048d0bbab k8s.gcr.io/sig-storage/csi-attacher:v2.2.0],SizeBytes:18451536,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:6c6a0332693a7c456378f6abd2bb40611826c1e1a733cadbdae2daab3125b71c k8s.gcr.io/sig-storage/csi-resizer:v0.5.0],SizeBytes:18412631,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:4051e85640c22f8e00c34dbd273576fc9e1e2829992656588062be9c0f69b04b k8s.gcr.io/e2e-test-images/nonroot:1.1],SizeBytes:17748448,},ContainerImage{Names:[docker.io/pingcap/chaos-mesh@sha256:b90c04665e0275602e52df37d4b6fdd4621df7d4e4c823fc63356559f67dea72 docker.io/pingcap/chaos-mesh:v0.8.0],SizeBytes:13863887,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e docker.io/coredns/coredns:latest docker.io/coredns/coredns:1.8.0],SizeBytes:12945155,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:4a6e0769130686518325b21b0c1d0688b54e7c79244d48e1b15634e98e40c6ef docker.io/coredns/coredns:1.7.1],SizeBytes:12935701,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 
docker.io/aquasec/kube-bench:0.3.1],SizeBytes:8042926,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:9622c6a6dac7499a055a382930f4de82905a3c5735c0753f7094115c9c871309 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v1.3.0],SizeBytes:7717137,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Mar 21 23:53:31.197: INFO: Logging kubelet events for node latest-worker
Mar 21 23:53:31.279: INFO: Logging pods the kubelet thinks is on node latest-worker
Mar 21 23:53:31.307: INFO: test-webserver-2866f0e1-e563-4f09-beef-bf9472c392e3 started at 2021-03-21 23:52:46 +0000 UTC (0+1 container statuses recorded)
Mar 21 23:53:31.307: INFO: Container test-webserver ready: false, restart count 0
Mar 21 23:53:31.307: INFO: pod-hostip-d301c4b8-bc1f-4598-99cd-4e7f8e1a3e21 started at 2021-03-21 23:53:26 +0000 UTC (0+1 container statuses recorded)
Mar 21 23:53:31.307: INFO: Container test ready: true, restart count 0
Mar 21 23:53:31.307: INFO: liveness-1c573071-af64-461d-8a69-6177fa223edd started at 2021-03-21 23:50:18 +0000 UTC (0+1 container statuses recorded)
Mar 21 23:53:31.307: INFO: Container agnhost-container ready: true, restart count 0
Mar 21 23:53:31.307: INFO: chaos-daemon-jxjgk started at 2021-03-21 23:50:17 +0000 UTC (0+1 container statuses recorded)
Mar 21 23:53:31.307: INFO: Container chaos-daemon ready: true, restart count 0
Mar 21 23:53:31.307: INFO: kube-proxy-5wvjm started at 2021-02-19 10:12:05 +0000 UTC (0+1 container statuses recorded)
Mar 21 23:53:31.307: INFO: Container kube-proxy ready: true, restart count 0
Mar 21 23:53:31.307: INFO: kindnet-g99fx started at 2021-03-21 23:50:18 +0000 UTC (0+1 container statuses recorded)
Mar 21 23:53:31.307: INFO: Container kindnet-cni ready: true, restart count 0
W0321 23:53:31.354943 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
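The Node Info blocks in this diagnostic dump carry the four conditions the suite polls while waiting for nodes to be ready: MemoryPressure, DiskPressure, PIDPressure, and Ready. Reading them for one node is a single API call; a sketch under the same kubeconfig assumption, with latest-worker as the example node:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Fetch one node and print the same NodeCondition fields the dumps show.
	node, err := cs.CoreV1().Nodes().Get(context.TODO(), "latest-worker", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	for _, c := range node.Status.Conditions {
		fmt.Printf("%-16s %-6s %s: %s\n", c.Type, c.Status, c.Reason, c.Message)
	}
}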
Mar 21 23:53:31.631: INFO: Latency metrics for node latest-worker Mar 21 23:53:31.631: INFO: Logging node info for node latest-worker2 Mar 21 23:53:31.699: INFO: Node Info: &Node{ObjectMeta:{latest-worker2 7d2a1377-0c6f-45fb-899e-6c307ecb1803 6964772 0 2021-02-19 10:12:05 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux io.kubernetes.storage.mock/node:some-mock-node kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-worker2 kubernetes.io/os:linux] map[csi.volume.kubernetes.io/nodeid:{"csi-mock-csi-mock-volumes-1005":"csi-mock-csi-mock-volumes-1005","csi-mock-csi-mock-volumes-1125":"csi-mock-csi-mock-volumes-1125","csi-mock-csi-mock-volumes-1158":"csi-mock-csi-mock-volumes-1158","csi-mock-csi-mock-volumes-1167":"csi-mock-csi-mock-volumes-1167","csi-mock-csi-mock-volumes-117":"csi-mock-csi-mock-volumes-117","csi-mock-csi-mock-volumes-123":"csi-mock-csi-mock-volumes-123","csi-mock-csi-mock-volumes-1248":"csi-mock-csi-mock-volumes-1248","csi-mock-csi-mock-volumes-132":"csi-mock-csi-mock-volumes-132","csi-mock-csi-mock-volumes-1462":"csi-mock-csi-mock-volumes-1462","csi-mock-csi-mock-volumes-1508":"csi-mock-csi-mock-volumes-1508","csi-mock-csi-mock-volumes-1620":"csi-mock-csi-mock-volumes-1620","csi-mock-csi-mock-volumes-1694":"csi-mock-csi-mock-volumes-1694","csi-mock-csi-mock-volumes-1823":"csi-mock-csi-mock-volumes-1823","csi-mock-csi-mock-volumes-1829":"csi-mock-csi-mock-volumes-1829","csi-mock-csi-mock-volumes-1852":"csi-mock-csi-mock-volumes-1852","csi-mock-csi-mock-volumes-1856":"csi-mock-csi-mock-volumes-1856","csi-mock-csi-mock-volumes-194":"csi-mock-csi-mock-volumes-194","csi-mock-csi-mock-volumes-1943":"csi-mock-csi-mock-volumes-1943","csi-mock-csi-mock-volumes-2103":"csi-mock-csi-mock-volumes-2103","csi-mock-csi-mock-volumes-2109":"csi-mock-csi-mock-volumes-2109","csi-mock-csi-mock-volumes-2134":"csi-mock-csi-mock-volumes-2134","csi-mock-csi-mock-volumes-2212":"csi-mock-csi-mock-volumes-2212","csi-mock-csi-mock-volumes-2302":"csi-mock-csi-mock-volumes-2302","csi-mock-csi-mock-volumes-2308":"csi-mock-csi-mock-volumes-2308","csi-mock-csi-mock-volumes-2407":"csi-mock-csi-mock-volumes-2407","csi-mock-csi-mock-volumes-2458":"csi-mock-csi-mock-volumes-2458","csi-mock-csi-mock-volumes-2474":"csi-mock-csi-mock-volumes-2474","csi-mock-csi-mock-volumes-254":"csi-mock-csi-mock-volumes-254","csi-mock-csi-mock-volumes-2575":"csi-mock-csi-mock-volumes-2575","csi-mock-csi-mock-volumes-2622":"csi-mock-csi-mock-volumes-2622","csi-mock-csi-mock-volumes-2651":"csi-mock-csi-mock-volumes-2651","csi-mock-csi-mock-volumes-2668":"csi-mock-csi-mock-volumes-2668","csi-mock-csi-mock-volumes-2781":"csi-mock-csi-mock-volumes-2781","csi-mock-csi-mock-volumes-2791":"csi-mock-csi-mock-volumes-2791","csi-mock-csi-mock-volumes-2823":"csi-mock-csi-mock-volumes-2823","csi-mock-csi-mock-volumes-2847":"csi-mock-csi-mock-volumes-2847","csi-mock-csi-mock-volumes-295":"csi-mock-csi-mock-volumes-295","csi-mock-csi-mock-volumes-3129":"csi-mock-csi-mock-volumes-3129","csi-mock-csi-mock-volumes-3190":"csi-mock-csi-mock-volumes-3190","csi-mock-csi-mock-volumes-3428":"csi-mock-csi-mock-volumes-3428","csi-mock-csi-mock-volumes-3490":"csi-mock-csi-mock-volumes-3490","csi-mock-csi-mock-volumes-3491":"csi-mock-csi-mock-volumes-3491","csi-mock-csi-mock-volumes-3532":"csi-mock-csi-mock-volumes-3532","csi-mock-csi-mock-volumes-3609":"csi-mock-csi-mock-volumes-3609","csi-mock-csi-mock-volumes-368":"csi-mock-csi-mock-volumes-368","csi-mock-csi-mock-volumes-3698":"csi-mock-csi-mock-volumes-3698","csi-mock-csi-moc
k-volumes-3714":"csi-mock-csi-mock-volumes-3714","csi-mock-csi-mock-volumes-3720":"csi-mock-csi-mock-volumes-3720","csi-mock-csi-mock-volumes-3722":"csi-mock-csi-mock-volumes-3722","csi-mock-csi-mock-volumes-3723":"csi-mock-csi-mock-volumes-3723","csi-mock-csi-mock-volumes-3783":"csi-mock-csi-mock-volumes-3783","csi-mock-csi-mock-volumes-3887":"csi-mock-csi-mock-volumes-3887","csi-mock-csi-mock-volumes-3973":"csi-mock-csi-mock-volumes-3973","csi-mock-csi-mock-volumes-4074":"csi-mock-csi-mock-volumes-4074","csi-mock-csi-mock-volumes-4164":"csi-mock-csi-mock-volumes-4164","csi-mock-csi-mock-volumes-4181":"csi-mock-csi-mock-volumes-4181","csi-mock-csi-mock-volumes-4442":"csi-mock-csi-mock-volumes-4442","csi-mock-csi-mock-volumes-4483":"csi-mock-csi-mock-volumes-4483","csi-mock-csi-mock-volumes-4549":"csi-mock-csi-mock-volumes-4549","csi-mock-csi-mock-volumes-4707":"csi-mock-csi-mock-volumes-4707","csi-mock-csi-mock-volumes-4906":"csi-mock-csi-mock-volumes-4906","csi-mock-csi-mock-volumes-4977":"csi-mock-csi-mock-volumes-4977","csi-mock-csi-mock-volumes-5116":"csi-mock-csi-mock-volumes-5116","csi-mock-csi-mock-volumes-5166":"csi-mock-csi-mock-volumes-5166","csi-mock-csi-mock-volumes-5233":"csi-mock-csi-mock-volumes-5233","csi-mock-csi-mock-volumes-5344":"csi-mock-csi-mock-volumes-5344","csi-mock-csi-mock-volumes-5406":"csi-mock-csi-mock-volumes-5406","csi-mock-csi-mock-volumes-5433":"csi-mock-csi-mock-volumes-5433","csi-mock-csi-mock-volumes-5483":"csi-mock-csi-mock-volumes-5483","csi-mock-csi-mock-volumes-5486":"csi-mock-csi-mock-volumes-5486","csi-mock-csi-mock-volumes-5520":"csi-mock-csi-mock-volumes-5520","csi-mock-csi-mock-volumes-5540":"csi-mock-csi-mock-volumes-5540","csi-mock-csi-mock-volumes-5592":"csi-mock-csi-mock-volumes-5592","csi-mock-csi-mock-volumes-5741":"csi-mock-csi-mock-volumes-5741","csi-mock-csi-mock-volumes-5753":"csi-mock-csi-mock-volumes-5753","csi-mock-csi-mock-volumes-5790":"csi-mock-csi-mock-volumes-5790","csi-mock-csi-mock-volumes-5820":"csi-mock-csi-mock-volumes-5820","csi-mock-csi-mock-volumes-5830":"csi-mock-csi-mock-volumes-5830","csi-mock-csi-mock-volumes-5880":"csi-mock-csi-mock-volumes-5880","csi-mock-csi-mock-volumes-5886":"csi-mock-csi-mock-volumes-5886","csi-mock-csi-mock-volumes-5899":"csi-mock-csi-mock-volumes-5899","csi-mock-csi-mock-volumes-5925":"csi-mock-csi-mock-volumes-5925","csi-mock-csi-mock-volumes-5928":"csi-mock-csi-mock-volumes-5928","csi-mock-csi-mock-volumes-6098":"csi-mock-csi-mock-volumes-6098","csi-mock-csi-mock-volumes-6154":"csi-mock-csi-mock-volumes-6154","csi-mock-csi-mock-volumes-6193":"csi-mock-csi-mock-volumes-6193","csi-mock-csi-mock-volumes-6237":"csi-mock-csi-mock-volumes-6237","csi-mock-csi-mock-volumes-6393":"csi-mock-csi-mock-volumes-6393","csi-mock-csi-mock-volumes-6394":"csi-mock-csi-mock-volumes-6394","csi-mock-csi-mock-volumes-6468":"csi-mock-csi-mock-volumes-6468","csi-mock-csi-mock-volumes-6508":"csi-mock-csi-mock-volumes-6508","csi-mock-csi-mock-volumes-6516":"csi-mock-csi-mock-volumes-6516","csi-mock-csi-mock-volumes-6520":"csi-mock-csi-mock-volumes-6520","csi-mock-csi-mock-volumes-6574":"csi-mock-csi-mock-volumes-6574","csi-mock-csi-mock-volumes-6663":"csi-mock-csi-mock-volumes-6663","csi-mock-csi-mock-volumes-6715":"csi-mock-csi-mock-volumes-6715","csi-mock-csi-mock-volumes-6754":"csi-mock-csi-mock-volumes-6754","csi-mock-csi-mock-volumes-6804":"csi-mock-csi-mock-volumes-6804","csi-mock-csi-mock-volumes-6918":"csi-mock-csi-mock-volumes-6918","csi-mock-csi-mock-volumes-6925":"csi-mock-csi-mock-volumes-6925","csi-moc
k-csi-mock-volumes-7092":"csi-mock-csi-mock-volumes-7092","csi-mock-csi-mock-volumes-7139":"csi-mock-csi-mock-volumes-7139","csi-mock-csi-mock-volumes-7270":"csi-mock-csi-mock-volumes-7270","csi-mock-csi-mock-volumes-7273":"csi-mock-csi-mock-volumes-7273","csi-mock-csi-mock-volumes-739":"csi-mock-csi-mock-volumes-739","csi-mock-csi-mock-volumes-7442":"csi-mock-csi-mock-volumes-7442","csi-mock-csi-mock-volumes-7448":"csi-mock-csi-mock-volumes-7448","csi-mock-csi-mock-volumes-7543":"csi-mock-csi-mock-volumes-7543","csi-mock-csi-mock-volumes-7597":"csi-mock-csi-mock-volumes-7597","csi-mock-csi-mock-volumes-7608":"csi-mock-csi-mock-volumes-7608","csi-mock-csi-mock-volumes-7642":"csi-mock-csi-mock-volumes-7642","csi-mock-csi-mock-volumes-7659":"csi-mock-csi-mock-volumes-7659","csi-mock-csi-mock-volumes-7725":"csi-mock-csi-mock-volumes-7725","csi-mock-csi-mock-volumes-7760":"csi-mock-csi-mock-volumes-7760","csi-mock-csi-mock-volumes-778":"csi-mock-csi-mock-volumes-778","csi-mock-csi-mock-volumes-7811":"csi-mock-csi-mock-volumes-7811","csi-mock-csi-mock-volumes-7819":"csi-mock-csi-mock-volumes-7819","csi-mock-csi-mock-volumes-791":"csi-mock-csi-mock-volumes-791","csi-mock-csi-mock-volumes-7929":"csi-mock-csi-mock-volumes-7929","csi-mock-csi-mock-volumes-7930":"csi-mock-csi-mock-volumes-7930","csi-mock-csi-mock-volumes-7933":"csi-mock-csi-mock-volumes-7933","csi-mock-csi-mock-volumes-7950":"csi-mock-csi-mock-volumes-7950","csi-mock-csi-mock-volumes-8005":"csi-mock-csi-mock-volumes-8005","csi-mock-csi-mock-volumes-8070":"csi-mock-csi-mock-volumes-8070","csi-mock-csi-mock-volumes-8123":"csi-mock-csi-mock-volumes-8123","csi-mock-csi-mock-volumes-8132":"csi-mock-csi-mock-volumes-8132","csi-mock-csi-mock-volumes-8134":"csi-mock-csi-mock-volumes-8134","csi-mock-csi-mock-volumes-8381":"csi-mock-csi-mock-volumes-8381","csi-mock-csi-mock-volumes-8391":"csi-mock-csi-mock-volumes-8391","csi-mock-csi-mock-volumes-8409":"csi-mock-csi-mock-volumes-8409","csi-mock-csi-mock-volumes-8420":"csi-mock-csi-mock-volumes-8420","csi-mock-csi-mock-volumes-8467":"csi-mock-csi-mock-volumes-8467","csi-mock-csi-mock-volumes-8581":"csi-mock-csi-mock-volumes-8581","csi-mock-csi-mock-volumes-8619":"csi-mock-csi-mock-volumes-8619","csi-mock-csi-mock-volumes-8708":"csi-mock-csi-mock-volumes-8708","csi-mock-csi-mock-volumes-8766":"csi-mock-csi-mock-volumes-8766","csi-mock-csi-mock-volumes-8789":"csi-mock-csi-mock-volumes-8789","csi-mock-csi-mock-volumes-8800":"csi-mock-csi-mock-volumes-8800","csi-mock-csi-mock-volumes-8819":"csi-mock-csi-mock-volumes-8819","csi-mock-csi-mock-volumes-8830":"csi-mock-csi-mock-volumes-8830","csi-mock-csi-mock-volumes-9034":"csi-mock-csi-mock-volumes-9034","csi-mock-csi-mock-volumes-9051":"csi-mock-csi-mock-volumes-9051","csi-mock-csi-mock-volumes-9052":"csi-mock-csi-mock-volumes-9052","csi-mock-csi-mock-volumes-9211":"csi-mock-csi-mock-volumes-9211","csi-mock-csi-mock-volumes-9234":"csi-mock-csi-mock-volumes-9234","csi-mock-csi-mock-volumes-9238":"csi-mock-csi-mock-volumes-9238","csi-mock-csi-mock-volumes-9316":"csi-mock-csi-mock-volumes-9316","csi-mock-csi-mock-volumes-9344":"csi-mock-csi-mock-volumes-9344","csi-mock-csi-mock-volumes-9386":"csi-mock-csi-mock-volumes-9386","csi-mock-csi-mock-volumes-941":"csi-mock-csi-mock-volumes-941","csi-mock-csi-mock-volumes-9416":"csi-mock-csi-mock-volumes-9416","csi-mock-csi-mock-volumes-9426":"csi-mock-csi-mock-volumes-9426","csi-mock-csi-mock-volumes-9429":"csi-mock-csi-mock-volumes-9429","csi-mock-csi-mock-volumes-9438":"csi-mock-csi-mock-volumes-9438","csi-mo
ck-csi-mock-volumes-9439":"csi-mock-csi-mock-volumes-9439","csi-mock-csi-mock-volumes-9665":"csi-mock-csi-mock-volumes-9665","csi-mock-csi-mock-volumes-9772":"csi-mock-csi-mock-volumes-9772","csi-mock-csi-mock-volumes-9793":"csi-mock-csi-mock-volumes-9793","csi-mock-csi-mock-volumes-9921":"csi-mock-csi-mock-volumes-9921","csi-mock-csi-mock-volumes-9953":"csi-mock-csi-mock-volumes-9953"} kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-02-19 10:12:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {kube-controller-manager Update v1 2021-03-21 23:52:52 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}} {kubelet Update v1 2021-03-21 23:53:22 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:csi.volume.kubernetes.io/nodeid":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:io.kubernetes.storage.mock/node":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:kind://docker/latest/latest-worker2,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-03-21 23:53:22 
+0000 UTC,LastTransitionTime:2021-02-19 10:12:05 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-03-21 23:53:22 +0000 UTC,LastTransitionTime:2021-02-19 10:12:05 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-03-21 23:53:22 +0000 UTC,LastTransitionTime:2021-02-19 10:12:05 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-03-21 23:53:22 +0000 UTC,LastTransitionTime:2021-02-19 10:12:35 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.13,},NodeAddress{Type:Hostname,Address:latest-worker2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:c99819c175ab4bf18f91789643e4cec7,SystemUUID:67d3f1bb-f61c-4599-98ac-5879bee4ddde,BootID:b267d78b-f69b-4338-80e8-3f4944338e5d,KernelVersion:4.15.0-118-generic,OSImage:Ubuntu 20.04 LTS,ContainerRuntimeVersion:containerd://1.3.3-14-g449e9269,KubeletVersion:v1.21.0-alpha.0,KubeProxyVersion:v1.21.0-alpha.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/ollivier/clearwater-cassandra@sha256:7460a5e29c91e16bd36534b5284b27832e1c6e0f7c22a4bb79eda79942d250e1 docker.io/ollivier/clearwater-cassandra:hunter],SizeBytes:386500834,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead-prov@sha256:ef3803b4336ea217f9a467dac7f5c4bc3d2cd0f3cbbcc6c101419506fc3a6fa4 docker.io/ollivier/clearwater-homestead-prov:hunter],SizeBytes:360721934,},ContainerImage{Names:[docker.io/ollivier/clearwater-ellis@sha256:abcaf4a9ec74aa746f3b3949f05ee359349fce2b0b3b2deedf67164719cff5dc docker.io/ollivier/clearwater-ellis:hunter],SizeBytes:351519591,},ContainerImage{Names:[docker.io/ollivier/clearwater-homer@sha256:f8738d756137990120f6aafca7e0bc57186ab591a4b51508ba9ddf30688f4de1 docker.io/ollivier/clearwater-homer:hunter],SizeBytes:344304298,},ContainerImage{Names:[docker.io/ollivier/clearwater-astaire@sha256:5bc730435563dc3e31f3a3b7bb58b5899c453929b624052622b3a9a1c2fd04d8 docker.io/ollivier/clearwater-astaire:hunter],SizeBytes:327310970,},ContainerImage{Names:[docker.io/ollivier/clearwater-bono@sha256:0da41ddbf72c158f8e617015450587634cdf6856696005601b2108df92a27254 docker.io/ollivier/clearwater-bono:hunter],SizeBytes:303708624,},ContainerImage{Names:[docker.io/ollivier/clearwater-sprout@sha256:fd91f90bdfbfac30288ca296962e36537f8fa311d34711904e027098227d9f49 docker.io/ollivier/clearwater-sprout:hunter],SizeBytes:298627136,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead@sha256:8467d6c9caefa6a402eba927ee31f8179b2dc3386ac4768dd04f4bd630a1b9e9 docker.io/ollivier/clearwater-homestead:hunter],SizeBytes:295167572,},ContainerImage{Names:[docker.io/ollivier/clearwater-ralf@sha256:7054a741dbd4ea45fa4efd8138c07e21d35fbed627971559de5df2d03883e94f docker.io/ollivier/clearwater-ralf:hunter],SizeBytes:287441316,},ContainerImage{Names:[docker.io/ollivier/clearwater-chronos@sha256:1d5f9b5385db632e3076463fec5a4db5d4a15ef21ac5a112cb42fb0551ffa36d 
docker.io/ollivier/clearwater-chronos:hunter],SizeBytes:285504787,},ContainerImage{Names:[k8s.gcr.io/etcd:3.4.13-0],SizeBytes:254659261,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.21.0-alpha.0],SizeBytes:136866158,},ContainerImage{Names:[docker.io/aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 docker.io/aquasec/kube-hunter:0.3.1],SizeBytes:124684106,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:112029652,},ContainerImage{Names:[docker.io/gluster/glusterdynamic-provisioner@sha256:90067cb05a7d217651e84576935934fd9dcff8e6bcdcbaa416bbf36fcd09dbd1 docker.io/gluster/glusterdynamic-provisioner:v1.0],SizeBytes:111200078,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner@sha256:8bc20b52ce066dd4ea3d9eaac40c04ea8a77f47c33789676580cf4c7c9ea3c3d k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner:v1.0],SizeBytes:111199402,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.21.0-alpha.0],SizeBytes:95511852,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.21.0-alpha.0],SizeBytes:88147262,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:85425365,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.21.0-alpha.0],SizeBytes:66088748,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:49230179,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:a3f7549ea04c419276e8b84e90a515bbce5bc8a057be2ed974ec45492eca346e k8s.gcr.io/e2e-test-images/agnhost:2.26],SizeBytes:49216572,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:4f3eec1e602e1e0b0e5858c8f3d399d0ef536799ec1f7666e025f211001cb706 k8s.gcr.io/e2e-test-images/agnhost:2.28],SizeBytes:49210832,},ContainerImage{Names:[k8s.gcr.io/coredns:1.7.0],SizeBytes:45355487,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:41902332,},ContainerImage{Names:[docker.io/library/httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a docker.io/library/httpd:2.4.39-alpine],SizeBytes:41901429,},ContainerImage{Names:[docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 docker.io/library/httpd:2.4.38-alpine],SizeBytes:40765017,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:40765006,},ContainerImage{Names:[docker.io/ollivier/clearwater-live-test@sha256:1ec1c46f531b5466e62e2a3f1e65e97336e8ea0ca50f11a4305775d6aeac9a58 docker.io/ollivier/clearwater-live-test:hunter],SizeBytes:39060692,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:ff02aacd9766d597883fabafc7ad604c719a57611db1bcc1564c69a45b000a55 
gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.17],SizeBytes:25311280,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:24757245,},ContainerImage{Names:[docker.io/pingcap/chaos-daemon@sha256:2138de599ca94104101fcd8f3bd04401f8c88f2bc721d0f77d55a81b85f4d92f docker.io/pingcap/chaos-daemon:v0.8.0],SizeBytes:21885344,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:20c828075d1e36f679d6a91e905b0927141eef5e15be0c9a1ca4a6a0ed9313d2 k8s.gcr.io/sig-storage/csi-provisioner:v2.1.0],SizeBytes:21205045,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:78e3393f5fd5ff6c1e5dada2478cfa456fb7164929e573cf9a87bf6532730679 k8s.gcr.io/sig-storage/csi-provisioner:v1.6.0],SizeBytes:19408504,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/regression-issue-74839@sha256:b4f1d8d61bdad84bd50442d161d5460e4019d53e989b64220fdbc62fc87d76bf k8s.gcr.io/e2e-test-images/regression-issue-74839:1.2],SizeBytes:18651485,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:2ffa647e8107cfd39e5f464e738dce014c9f5e51b108da36c3ab621048d0bbab k8s.gcr.io/sig-storage/csi-attacher:v2.2.0],SizeBytes:18451536,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:6c6a0332693a7c456378f6abd2bb40611826c1e1a733cadbdae2daab3125b71c k8s.gcr.io/sig-storage/csi-resizer:v0.5.0],SizeBytes:18412631,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:4051e85640c22f8e00c34dbd273576fc9e1e2829992656588062be9c0f69b04b k8s.gcr.io/e2e-test-images/nonroot:1.1],SizeBytes:17748448,},ContainerImage{Names:[docker.io/pingcap/chaos-mesh@sha256:b90c04665e0275602e52df37d4b6fdd4621df7d4e4c823fc63356559f67dea72 docker.io/pingcap/chaos-mesh:v0.8.0],SizeBytes:13863887,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e docker.io/coredns/coredns:1.8.0],SizeBytes:12945155,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:4a6e0769130686518325b21b0c1d0688b54e7c79244d48e1b15634e98e40c6ef docker.io/coredns/coredns:1.7.1],SizeBytes:12935701,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:642ff9910da6ea9a8624b0234eef52af9ca75ecbec474c5507cb096bdfbae4e5 docker.io/coredns/coredns:latest],SizeBytes:12893350,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 docker.io/aquasec/kube-bench:0.3.1],SizeBytes:8042926,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:9622c6a6dac7499a055a382930f4de82905a3c5735c0753f7094115c9c871309 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v1.3.0],SizeBytes:7717137,},ContainerImage{Names:[quay.io/coreos/etcd:v2.2.5],SizeBytes:7670543,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 21 23:53:31.700: INFO: Logging kubelet events for node latest-worker2 Mar 21 23:53:31.732: INFO: Logging pods the kubelet thinks is on node latest-worker2 Mar 21 23:53:31.798: INFO: service-proxy-toggled-d7fcz started at 
2021-03-21 23:49:13 +0000 UTC (0+1 container statuses recorded) Mar 21 23:53:31.798: INFO: Container service-proxy-toggled ready: true, restart count 0 Mar 21 23:53:31.798: INFO: pvc-volume-tester-4bp25 started at 2021-03-21 23:52:52 +0000 UTC (0+1 container statuses recorded) Mar 21 23:53:31.798: INFO: Container volume-tester ready: false, restart count 0 Mar 21 23:53:31.798: INFO: kindnet-gp4fv started at 2021-03-21 23:47:16 +0000 UTC (0+1 container statuses recorded) Mar 21 23:53:31.798: INFO: Container kindnet-cni ready: true, restart count 0 Mar 21 23:53:31.798: INFO: service-proxy-toggled-kkztj started at 2021-03-21 23:49:35 +0000 UTC (0+1 container statuses recorded) Mar 21 23:53:31.798: INFO: Container service-proxy-toggled ready: true, restart count 0 Mar 21 23:53:31.798: INFO: service-proxy-toggled-64tkf started at 2021-03-21 23:49:13 +0000 UTC (0+1 container statuses recorded) Mar 21 23:53:31.798: INFO: Container service-proxy-toggled ready: true, restart count 0 Mar 21 23:53:31.798: INFO: kube-proxy-7q92q started at 2021-02-19 10:12:05 +0000 UTC (0+1 container statuses recorded) Mar 21 23:53:31.798: INFO: Container kube-proxy ready: true, restart count 0 Mar 21 23:53:31.798: INFO: chaos-daemon-95pmt started at 2021-03-21 23:47:16 +0000 UTC (0+1 container statuses recorded) Mar 21 23:53:31.798: INFO: Container chaos-daemon ready: true, restart count 0 Mar 21 23:53:31.798: INFO: service-proxy-disabled-j2gv8 started at 2021-03-21 23:49:04 +0000 UTC (0+1 container statuses recorded) Mar 21 23:53:31.798: INFO: Container service-proxy-disabled ready: true, restart count 0 Mar 21 23:53:31.798: INFO: service-proxy-disabled-86srx started at 2021-03-21 23:49:35 +0000 UTC (0+1 container statuses recorded) Mar 21 23:53:31.798: INFO: Container service-proxy-disabled ready: true, restart count 0 Mar 21 23:53:31.798: INFO: chaos-controller-manager-69c479c674-k8l6r started at 2021-03-21 23:49:35 +0000 UTC (0+1 container statuses recorded) Mar 21 23:53:31.798: INFO: Container chaos-mesh ready: true, restart count 0 Mar 21 23:53:31.798: INFO: service-proxy-disabled-tl9jg started at 2021-03-21 23:49:36 +0000 UTC (0+1 container statuses recorded) Mar 21 23:53:31.798: INFO: Container service-proxy-disabled ready: true, restart count 0 Mar 21 23:53:31.798: INFO: csi-mockplugin-attacher-0 started at 2021-03-21 23:52:36 +0000 UTC (0+1 container statuses recorded) Mar 21 23:53:31.798: INFO: Container csi-attacher ready: true, restart count 0 Mar 21 23:53:31.798: INFO: csi-mockplugin-0 started at 2021-03-21 23:52:36 +0000 UTC (0+3 container statuses recorded) Mar 21 23:53:31.798: INFO: Container csi-provisioner ready: true, restart count 0 Mar 21 23:53:31.798: INFO: Container driver-registrar ready: true, restart count 0 Mar 21 23:53:31.798: INFO: Container mock ready: true, restart count 0 W0321 23:53:31.890101 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Mar 21 23:53:32.317: INFO: Latency metrics for node latest-worker2 Mar 21 23:53:32.317: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "cronjob-3615" for this suite. 
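The node dump above is the standard diagnostic the e2e framework prints when a spec fails; the Failure block that follows shows the root cause: creating the CronJob returned 404 NotFound. A plausible reading, not provable from this log alone, is version skew: the v1.21.0-beta.1 test binary creates CronJobs through batch/v1, which graduated in the 1.21 cycle, while the v1.21.0-alpha.0 apiserver may still serve only batch/v1beta1, so the batch/v1 endpoint does not exist. A minimal sketch of the kind of object the spec tries to create; the namespace is from the run, everything else is illustrative:

  apiVersion: batch/v1                 # assumption: the beta.1 client uses batch/v1; an alpha.0 server may not serve it
  kind: CronJob
  metadata:
    name: forbid                       # illustrative name, not taken from the run
    namespace: cronjob-3615
  spec:
    schedule: "*/1 * * * *"            # illustrative schedule
    concurrencyPolicy: Forbid          # the behavior under test: no new Job while one is still running
    jobTemplate:
      spec:
        template:
          spec:
            restartPolicy: OnFailure
            containers:
            - name: c
              image: k8s.gcr.io/e2e-test-images/busybox:1.29   # illustrative image
              command: ["sleep", "300"]

Against a server that does not serve batch/v1, such a create fails with exactly the "the server could not find the requested resource" error reported below.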
• Failure [2.000 seconds] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should not schedule new jobs when ForbidConcurrent [Slow] [Conformance] [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Mar 21 23:53:30.683: Failed to create CronJob in namespace cronjob-3615 Unexpected error: <*errors.StatusError | 0xc001f06820>: { ErrStatus: { TypeMeta: {Kind: "", APIVersion: ""}, ListMeta: { SelfLink: "", ResourceVersion: "", Continue: "", RemainingItemCount: nil, }, Status: "Failure", Message: "the server could not find the requested resource", Reason: "NotFound", Details: {Name: "", Group: "", Kind: "", UID: "", Causes: nil, RetryAfterSeconds: 0}, Code: 404, }, } the server could not find the requested resource occurred /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:132 ------------------------------ {"msg":"FAILED [sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","total":330,"completed":32,"skipped":574,"failed":1,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]"]} SSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 21 23:53:32.441: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name projected-configmap-test-volume-map-d11fa3ea-e225-4ed2-b0bf-c6a972ffdf53 STEP: Creating a pod to test consume configMaps Mar 21 23:53:32.665: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-cec4bd40-c7b1-4d2e-ba98-083aa5473999" in namespace "projected-70" to be "Succeeded or Failed" Mar 21 23:53:32.799: INFO: Pod "pod-projected-configmaps-cec4bd40-c7b1-4d2e-ba98-083aa5473999": Phase="Pending", Reason="", readiness=false. Elapsed: 133.756489ms Mar 21 23:53:35.073: INFO: Pod "pod-projected-configmaps-cec4bd40-c7b1-4d2e-ba98-083aa5473999": Phase="Pending", Reason="", readiness=false. Elapsed: 2.407502708s Mar 21 23:53:37.423: INFO: Pod "pod-projected-configmaps-cec4bd40-c7b1-4d2e-ba98-083aa5473999": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.757615007s STEP: Saw pod success Mar 21 23:53:37.423: INFO: Pod "pod-projected-configmaps-cec4bd40-c7b1-4d2e-ba98-083aa5473999" satisfied condition "Succeeded or Failed" Mar 21 23:53:37.428: INFO: Trying to get logs from node latest-worker pod pod-projected-configmaps-cec4bd40-c7b1-4d2e-ba98-083aa5473999 container agnhost-container: STEP: delete the pod Mar 21 23:53:37.581: INFO: Waiting for pod pod-projected-configmaps-cec4bd40-c7b1-4d2e-ba98-083aa5473999 to disappear Mar 21 23:53:37.631: INFO: Pod pod-projected-configmaps-cec4bd40-c7b1-4d2e-ba98-083aa5473999 no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 21 23:53:37.631: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-70" for this suite. • [SLOW TEST:5.289 seconds] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":330,"completed":33,"skipped":584,"failed":1,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]"]} S ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 21 23:53:37.730: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test emptydir 0777 on tmpfs Mar 21 23:53:38.117: INFO: Waiting up to 5m0s for pod "pod-bd5224bb-a416-4721-ab2b-8c2b24c86686" in namespace "emptydir-6209" to be "Succeeded or Failed" Mar 21 23:53:38.157: INFO: Pod "pod-bd5224bb-a416-4721-ab2b-8c2b24c86686": Phase="Pending", Reason="", readiness=false. Elapsed: 39.589671ms Mar 21 23:53:40.219: INFO: Pod "pod-bd5224bb-a416-4721-ab2b-8c2b24c86686": Phase="Pending", Reason="", readiness=false. Elapsed: 2.101719551s Mar 21 23:53:42.225: INFO: Pod "pod-bd5224bb-a416-4721-ab2b-8c2b24c86686": Phase="Running", Reason="", readiness=true. Elapsed: 4.107820308s Mar 21 23:53:44.257: INFO: Pod "pod-bd5224bb-a416-4721-ab2b-8c2b24c86686": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.14011777s STEP: Saw pod success Mar 21 23:53:44.257: INFO: Pod "pod-bd5224bb-a416-4721-ab2b-8c2b24c86686" satisfied condition "Succeeded or Failed" Mar 21 23:53:44.275: INFO: Trying to get logs from node latest-worker pod pod-bd5224bb-a416-4721-ab2b-8c2b24c86686 container test-container: STEP: delete the pod Mar 21 23:53:44.420: INFO: Waiting for pod pod-bd5224bb-a416-4721-ab2b-8c2b24c86686 to disappear Mar 21 23:53:44.488: INFO: Pod pod-bd5224bb-a416-4721-ab2b-8c2b24c86686 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 21 23:53:44.488: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6209" for this suite. • [SLOW TEST:6.838 seconds] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":330,"completed":34,"skipped":585,"failed":1,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]"]} SSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 21 23:53:44.568: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name configmap-test-volume-0b353e4c-ef6b-4e71-8fd5-ccba99b92599 STEP: Creating a pod to test consume configMaps Mar 21 23:53:44.842: INFO: Waiting up to 5m0s for pod "pod-configmaps-25ac3aed-2f78-4c15-8504-a481c9a72e8f" in namespace "configmap-2683" to be "Succeeded or Failed" Mar 21 23:53:44.892: INFO: Pod "pod-configmaps-25ac3aed-2f78-4c15-8504-a481c9a72e8f": Phase="Pending", Reason="", readiness=false. Elapsed: 49.807268ms Mar 21 23:53:46.922: INFO: Pod "pod-configmaps-25ac3aed-2f78-4c15-8504-a481c9a72e8f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.080590475s Mar 21 23:53:49.076: INFO: Pod "pod-configmaps-25ac3aed-2f78-4c15-8504-a481c9a72e8f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.234620267s Mar 21 23:53:51.089: INFO: Pod "pod-configmaps-25ac3aed-2f78-4c15-8504-a481c9a72e8f": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.247075431s STEP: Saw pod success Mar 21 23:53:51.089: INFO: Pod "pod-configmaps-25ac3aed-2f78-4c15-8504-a481c9a72e8f" satisfied condition "Succeeded or Failed" Mar 21 23:53:51.184: INFO: Trying to get logs from node latest-worker pod pod-configmaps-25ac3aed-2f78-4c15-8504-a481c9a72e8f container agnhost-container: STEP: delete the pod Mar 21 23:53:51.370: INFO: Waiting for pod pod-configmaps-25ac3aed-2f78-4c15-8504-a481c9a72e8f to disappear Mar 21 23:53:51.404: INFO: Pod pod-configmaps-25ac3aed-2f78-4c15-8504-a481c9a72e8f no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 21 23:53:51.404: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-2683" for this suite. • [SLOW TEST:6.969 seconds] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":330,"completed":35,"skipped":600,"failed":1,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]"]} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl diff should check if kubectl diff finds a difference for Deployments [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 21 23:53:51.538: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241 [It] should check if kubectl diff finds a difference for Deployments [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: create deployment with httpd image Mar 21 23:53:51.711: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=kubectl-2665 create -f -' Mar 21 23:53:52.076: INFO: stderr: "" Mar 21 23:53:52.076: INFO: stdout: "deployment.apps/httpd-deployment created\n" STEP: verify diff finds difference between live and declared image Mar 21 23:53:52.077: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=kubectl-2665 diff -f -' Mar 21 23:53:52.552: INFO: rc: 1 Mar 21 23:53:52.552: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=kubectl-2665 delete -f -' Mar 21 23:53:52.762: INFO: stderr: "" Mar 21 23:53:52.762: INFO: stdout: "deployment.apps \"httpd-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 
Mar 21 23:53:52.762: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2665" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl diff should check if kubectl diff finds a difference for Deployments [Conformance]","total":330,"completed":36,"skipped":619,"failed":1,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-instrumentation] Events API should ensure that an event can be fetched, patched, deleted, and listed [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-instrumentation] Events API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 21 23:53:52.882: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-instrumentation] Events API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/instrumentation/events.go:81 [It] should ensure that an event can be fetched, patched, deleted, and listed [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a test event STEP: listing events in all namespaces STEP: listing events in test namespace STEP: listing events with field selection filtering on source STEP: listing events with field selection filtering on reportingController STEP: getting the test event STEP: patching the test event STEP: getting the test event STEP: updating the test event STEP: getting the test event STEP: deleting the test event STEP: listing events in all namespaces STEP: listing events in test namespace [AfterEach] [sig-instrumentation] Events API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 21 23:53:53.793: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-7061" for this suite. 
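Two notes on the specs above. For the kubectl diff spec, rc: 1 is the expected outcome: kubectl diff exits 0 when live and declared state match and 1 when a difference is found, so a non-zero rc here is the pass condition, not an error. For the Events API spec, the steps drive the full verb set (create, list with field selectors on source and reportingController, get, patch, update, delete) against events.k8s.io. A rough sketch of the kind of object involved; the namespace is from the run, all field values are illustrative:

  apiVersion: events.k8s.io/v1
  kind: Event
  metadata:
    name: test-event                   # illustrative
    namespace: events-7061
  type: Normal
  reason: Testing                      # illustrative
  action: Tested                       # illustrative
  note: this is a test event
  reportingController: test-controller # the field-selector target named in the steps
  reportingInstance: test-instance
  eventTime: "2021-03-21T23:53:53.000000Z"
  regarding:
    kind: Pod                          # illustrative referent
    name: some-pod
    namespace: events-7061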
•{"msg":"PASSED [sig-instrumentation] Events API should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","total":330,"completed":37,"skipped":658,"failed":1,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for ExternalName services [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 21 23:53:53.963: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for ExternalName services [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a test externalName service STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-1893.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-1893.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-1893.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-1893.svc.cluster.local; sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 21 23:54:02.515: INFO: DNS probes using dns-test-4022fb75-6e46-452d-a810-9eca81001700 succeeded STEP: deleting the pod STEP: changing the externalName to bar.example.com STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-1893.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-1893.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-1893.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-1893.svc.cluster.local; sleep 1; done STEP: creating a second pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 21 23:54:11.217: INFO: File wheezy_udp@dns-test-service-3.dns-1893.svc.cluster.local from pod dns-1893/dns-test-25710ee4-5b5b-480e-a26a-cf2f8348d6d8 contains 'foo.example.com. ' instead of 'bar.example.com.' Mar 21 23:54:11.229: INFO: File jessie_udp@dns-test-service-3.dns-1893.svc.cluster.local from pod dns-1893/dns-test-25710ee4-5b5b-480e-a26a-cf2f8348d6d8 contains 'foo.example.com. ' instead of 'bar.example.com.' Mar 21 23:54:11.229: INFO: Lookups using dns-1893/dns-test-25710ee4-5b5b-480e-a26a-cf2f8348d6d8 failed for: [wheezy_udp@dns-test-service-3.dns-1893.svc.cluster.local jessie_udp@dns-test-service-3.dns-1893.svc.cluster.local] Mar 21 23:54:16.681: INFO: File wheezy_udp@dns-test-service-3.dns-1893.svc.cluster.local from pod dns-1893/dns-test-25710ee4-5b5b-480e-a26a-cf2f8348d6d8 contains 'foo.example.com. ' instead of 'bar.example.com.' Mar 21 23:54:16.711: INFO: File jessie_udp@dns-test-service-3.dns-1893.svc.cluster.local from pod dns-1893/dns-test-25710ee4-5b5b-480e-a26a-cf2f8348d6d8 contains 'foo.example.com. ' instead of 'bar.example.com.' 
Mar 21 23:54:16.711: INFO: Lookups using dns-1893/dns-test-25710ee4-5b5b-480e-a26a-cf2f8348d6d8 failed for: [wheezy_udp@dns-test-service-3.dns-1893.svc.cluster.local jessie_udp@dns-test-service-3.dns-1893.svc.cluster.local] Mar 21 23:54:21.298: INFO: File wheezy_udp@dns-test-service-3.dns-1893.svc.cluster.local from pod dns-1893/dns-test-25710ee4-5b5b-480e-a26a-cf2f8348d6d8 contains 'foo.example.com. ' instead of 'bar.example.com.' Mar 21 23:54:21.344: INFO: File jessie_udp@dns-test-service-3.dns-1893.svc.cluster.local from pod dns-1893/dns-test-25710ee4-5b5b-480e-a26a-cf2f8348d6d8 contains 'foo.example.com. ' instead of 'bar.example.com.' Mar 21 23:54:21.344: INFO: Lookups using dns-1893/dns-test-25710ee4-5b5b-480e-a26a-cf2f8348d6d8 failed for: [wheezy_udp@dns-test-service-3.dns-1893.svc.cluster.local jessie_udp@dns-test-service-3.dns-1893.svc.cluster.local] Mar 21 23:54:26.406: INFO: File wheezy_udp@dns-test-service-3.dns-1893.svc.cluster.local from pod dns-1893/dns-test-25710ee4-5b5b-480e-a26a-cf2f8348d6d8 contains 'foo.example.com. ' instead of 'bar.example.com.' Mar 21 23:54:26.463: INFO: File jessie_udp@dns-test-service-3.dns-1893.svc.cluster.local from pod dns-1893/dns-test-25710ee4-5b5b-480e-a26a-cf2f8348d6d8 contains 'foo.example.com. ' instead of 'bar.example.com.' Mar 21 23:54:26.463: INFO: Lookups using dns-1893/dns-test-25710ee4-5b5b-480e-a26a-cf2f8348d6d8 failed for: [wheezy_udp@dns-test-service-3.dns-1893.svc.cluster.local jessie_udp@dns-test-service-3.dns-1893.svc.cluster.local] Mar 21 23:54:31.772: INFO: DNS probes using dns-test-25710ee4-5b5b-480e-a26a-cf2f8348d6d8 succeeded STEP: deleting the pod STEP: changing the service to type=ClusterIP STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-1893.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-1893.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-1893.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-1893.svc.cluster.local; sleep 1; done STEP: creating a third pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 21 23:54:41.172: INFO: DNS probes using dns-test-99543f5f-d539-4aa8-8110-3e65fb13fd1f succeeded STEP: deleting the pod STEP: deleting the test externalName service [AfterEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 21 23:54:43.102: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-1893" for this suite. 
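The DNS spec above exercises an ExternalName Service through three states: a CNAME to foo.example.com, a CNAME to bar.example.com after externalName is changed, and finally an A record after the Service is converted to type ClusterIP. The repeated "contains 'foo.example.com.' instead of 'bar.example.com.'" lines are the probe pods still seeing the old CNAME, most likely cached answers, until the change propagates; the spec polls until the lookups succeed. The Service shape, reconstructed from the names in the log (only the Service name, namespace, and targets appear in the run; the rest is a sketch):

  apiVersion: v1
  kind: Service
  metadata:
    name: dns-test-service-3
    namespace: dns-1893
  spec:
    type: ExternalName
    externalName: foo.example.com      # changed to bar.example.com mid-test, then type switched to ClusterIP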
• [SLOW TEST:49.560 seconds] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should provide DNS for ExternalName services [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":330,"completed":38,"skipped":681,"failed":1,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 21 23:54:43.523: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 [BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:105 STEP: Creating service test in namespace statefulset-8387 [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Initializing watcher for selector baz=blah,foo=bar STEP: Creating stateful set ss in namespace statefulset-8387 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-8387 Mar 21 23:54:44.751: INFO: Found 0 stateful pods, waiting for 1 Mar 21 23:54:54.789: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod Mar 21 23:54:54.807: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=statefulset-8387 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Mar 21 23:54:55.162: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" Mar 21 23:54:55.162: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Mar 21 23:54:55.162: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Mar 21 23:54:55.213: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Mar 21 23:55:05.342: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Mar 21 23:55:05.342: INFO: Waiting for statefulset status.replicas updated to 0 Mar 21 23:55:05.861: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999452s Mar 21 23:55:06.939: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.689898747s Mar 21 23:55:07.945: INFO: Verifying 
statefulset ss doesn't scale past 1 for another 7.611547998s Mar 21 23:55:09.019: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.6070614s Mar 21 23:55:10.110: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.532633285s Mar 21 23:55:11.196: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.441607521s Mar 21 23:55:12.201: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.354825033s Mar 21 23:55:13.238: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.350477198s Mar 21 23:55:14.382: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.313473397s Mar 21 23:55:15.454: INFO: Verifying statefulset ss doesn't scale past 1 for another 169.851947ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-8387 Mar 21 23:55:16.483: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=statefulset-8387 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 21 23:55:16.716: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" Mar 21 23:55:16.716: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Mar 21 23:55:16.716: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Mar 21 23:55:16.736: INFO: Found 1 stateful pods, waiting for 3 Mar 21 23:55:27.080: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Mar 21 23:55:27.080: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Mar 21 23:55:27.080: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Pending - Ready=false Mar 21 23:55:36.764: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Mar 21 23:55:36.764: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Mar 21 23:55:36.764: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod Mar 21 23:55:36.908: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=statefulset-8387 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Mar 21 23:55:37.169: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" Mar 21 23:55:37.169: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Mar 21 23:55:37.169: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Mar 21 23:55:37.169: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=statefulset-8387 exec ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Mar 21 23:55:37.452: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" Mar 21 23:55:37.452: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Mar 21 23:55:37.452: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Mar 21 23:55:37.452: INFO: Running '/usr/local/bin/kubectl 
--server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=statefulset-8387 exec ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Mar 21 23:55:37.782: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" Mar 21 23:55:37.782: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Mar 21 23:55:37.782: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Mar 21 23:55:37.782: INFO: Waiting for statefulset status.replicas updated to 0 Mar 21 23:55:37.857: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2 Mar 21 23:55:47.929: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Mar 21 23:55:47.929: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Mar 21 23:55:47.929: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Mar 21 23:55:48.173: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999341s Mar 21 23:55:49.205: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.874478301s Mar 21 23:55:50.254: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.842683993s Mar 21 23:55:51.263: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.794128206s Mar 21 23:55:52.320: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.784401688s Mar 21 23:55:53.348: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.728231186s Mar 21 23:55:54.369: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.700276473s Mar 21 23:55:55.419: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.678184271s Mar 21 23:55:56.463: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.629464765s Mar 21 23:55:57.473: INFO: Verifying statefulset ss doesn't scale past 3 for another 585.373428ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods are running in namespace statefulset-8387 Mar 21 23:55:58.480: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=statefulset-8387 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 21 23:55:58.731: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" Mar 21 23:55:58.731: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Mar 21 23:55:58.731: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Mar 21 23:55:58.731: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=statefulset-8387 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 21 23:55:59.017: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" Mar 21 23:55:59.017: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Mar 21 23:55:59.017: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Mar 21 23:55:59.017: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=statefulset-8387 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html
/usr/local/apache2/htdocs/ || true' Mar 21 23:55:59.386: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" Mar 21 23:55:59.386: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Mar 21 23:55:59.386: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Mar 21 23:55:59.386: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:116 Mar 21 23:58:19.560: INFO: Deleting all statefulset in ns statefulset-8387 Mar 21 23:58:19.567: INFO: Scaling statefulset ss to 0 Mar 21 23:58:19.678: INFO: Waiting for statefulset status.replicas updated to 0 Mar 21 23:58:19.696: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 21 23:58:19.797: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-8387" for this suite. • [SLOW TEST:216.414 seconds] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:95 Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":330,"completed":39,"skipped":710,"failed":1,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]"]} SSSSSS ------------------------------ [sig-node] Secrets should fail to create secret due to empty secret key [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-node] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 21 23:58:19.938: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create secret due to empty secret key [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating projection with secret that has name secret-emptykey-test-709b3ed1-70aa-4a7a-9ba1-0068b184e290 [AfterEach] [sig-node] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 21 23:58:20.119: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-2831" for this suite. 
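The Secrets spec that just tore down is a negative test: a Secret whose data map uses the empty string as a key must be rejected by apiserver validation, so the create fails and nothing is stored, which is why the PASSED line follows immediately with no pod activity. A sketch of the offending object; the name and namespace are from the run, the value is illustrative:

  apiVersion: v1
  kind: Secret
  metadata:
    name: secret-emptykey-test-709b3ed1-70aa-4a7a-9ba1-0068b184e290
    namespace: secrets-2831
  data:
    "": dmFsdWU=                       # empty key; rejected at validation time (exact error text not captured in this log)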
•{"msg":"PASSED [sig-node] Secrets should fail to create secret due to empty secret key [Conformance]","total":330,"completed":40,"skipped":716,"failed":1,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 21 23:58:20.281: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:135 [It] should retry creating failed daemon pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. Mar 21 23:58:20.845: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 21 23:58:20.883: INFO: Number of nodes with available pods: 0 Mar 21 23:58:20.883: INFO: Node latest-worker is running more than one daemon pod Mar 21 23:58:22.016: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 21 23:58:22.044: INFO: Number of nodes with available pods: 0 Mar 21 23:58:22.044: INFO: Node latest-worker is running more than one daemon pod Mar 21 23:58:23.257: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 21 23:58:23.499: INFO: Number of nodes with available pods: 0 Mar 21 23:58:23.499: INFO: Node latest-worker is running more than one daemon pod Mar 21 23:58:24.351: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 21 23:58:24.422: INFO: Number of nodes with available pods: 0 Mar 21 23:58:24.422: INFO: Node latest-worker is running more than one daemon pod Mar 21 23:58:24.946: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 21 23:58:25.087: INFO: Number of nodes with available pods: 0 Mar 21 23:58:25.087: INFO: Node latest-worker is running more than one daemon pod Mar 21 23:58:25.897: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 21 23:58:25.927: INFO: Number of nodes with available pods: 2 Mar 21 23:58:25.927: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. 
Mar 21 23:58:26.102: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 21 23:58:26.187: INFO: Number of nodes with available pods: 1 Mar 21 23:58:26.187: INFO: Node latest-worker2 is running more than one daemon pod Mar 21 23:58:27.235: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 21 23:58:27.422: INFO: Number of nodes with available pods: 1 Mar 21 23:58:27.422: INFO: Node latest-worker2 is running more than one daemon pod Mar 21 23:58:28.608: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 21 23:58:28.906: INFO: Number of nodes with available pods: 1 Mar 21 23:58:28.906: INFO: Node latest-worker2 is running more than one daemon pod Mar 21 23:58:29.195: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 21 23:58:29.222: INFO: Number of nodes with available pods: 1 Mar 21 23:58:29.222: INFO: Node latest-worker2 is running more than one daemon pod Mar 21 23:58:30.394: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 21 23:58:31.063: INFO: Number of nodes with available pods: 2 Mar 21 23:58:31.063: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Wait for the failed daemon pod to be completely deleted. [AfterEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:101 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-2683, will wait for the garbage collector to delete the pods Mar 21 23:58:33.170: INFO: Deleting DaemonSet.extensions daemon-set took: 817.398711ms Mar 21 23:58:33.970: INFO: Terminating DaemonSet.extensions daemon-set pods took: 800.523232ms Mar 21 23:59:15.674: INFO: Number of nodes with available pods: 0 Mar 21 23:59:15.674: INFO: Number of running nodes: 0, number of available pods: 0 Mar 21 23:59:15.699: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"6974305"},"items":null} Mar 21 23:59:15.741: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"6974307"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 21 23:59:15.929: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-2683" for this suite. 
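The repeated "DaemonSet pods can't tolerate node latest-control-plane" lines above are expected, not an error: the control-plane node carries a node-role.kubernetes.io/master:NoSchedule taint and the test DaemonSet declares no matching toleration, so the framework excludes that node from its counts. The revival step then forces one daemon pod's phase to Failed and verifies the controller recreates it, which is the retry behavior the spec title names. For contrast, a DaemonSet that did want to cover the control-plane node would add a toleration; a minimal sketch, not the test's actual manifest:

  apiVersion: apps/v1
  kind: DaemonSet
  metadata:
    name: daemon-set
    namespace: daemonsets-2683
  spec:
    selector:
      matchLabels:
        app: daemon                    # illustrative label
    template:
      metadata:
        labels:
          app: daemon
      spec:
        tolerations:                   # absent in the test's DaemonSet, hence the skipped node
        - key: node-role.kubernetes.io/master
          operator: Exists
          effect: NoSchedule
        containers:
        - name: app
          image: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1   # an image already present on the nodes, per the image list above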
• [SLOW TEST:55.798 seconds] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should retry creating failed daemon pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","total":330,"completed":41,"skipped":741,"failed":1,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 21 23:59:16.079: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 21 23:59:17.427: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 21 23:59:20.320: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751967957, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751967957, loc:(*time.Location)(0x99208a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751967957, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751967957, loc:(*time.Location)(0x99208a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-8977db\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 21 23:59:22.651: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751967957, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751967957, loc:(*time.Location)(0x99208a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751967957, loc:(*time.Location)(0x99208a0)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751967957, loc:(*time.Location)(0x99208a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-8977db\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 21 23:59:25.412: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate pod and apply defaults after mutation [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Registering the mutating pod webhook via the AdmissionRegistration API STEP: create a pod that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 21 23:59:27.512: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-2754" for this suite. STEP: Destroying namespace "webhook-2754-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 • [SLOW TEST:13.557 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate pod and apply defaults after mutation [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":330,"completed":42,"skipped":790,"failed":1,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]"]} SSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Security Context should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 21 23:59:29.638: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context STEP: Waiting for a default service account to be provisioned in namespace [It] should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test pod.Spec.SecurityContext.RunAsUser Mar 21 23:59:29.803: INFO: Waiting up to 5m0s for pod "security-context-41df1add-8f8c-4d9f-9442-70f9b84cd752" in namespace "security-context-9048" to be "Succeeded or Failed" Mar 21 23:59:30.230: INFO: Pod "security-context-41df1add-8f8c-4d9f-9442-70f9b84cd752": Phase="Pending", Reason="", readiness=false. Elapsed: 427.102503ms Mar 21 23:59:32.321: INFO: Pod "security-context-41df1add-8f8c-4d9f-9442-70f9b84cd752": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.518185651s Mar 21 23:59:34.339: INFO: Pod "security-context-41df1add-8f8c-4d9f-9442-70f9b84cd752": Phase="Running", Reason="", readiness=true. Elapsed: 4.535736076s Mar 21 23:59:36.417: INFO: Pod "security-context-41df1add-8f8c-4d9f-9442-70f9b84cd752": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.614593914s STEP: Saw pod success Mar 21 23:59:36.418: INFO: Pod "security-context-41df1add-8f8c-4d9f-9442-70f9b84cd752" satisfied condition "Succeeded or Failed" Mar 21 23:59:36.443: INFO: Trying to get logs from node latest-worker pod security-context-41df1add-8f8c-4d9f-9442-70f9b84cd752 container test-container: STEP: delete the pod Mar 21 23:59:36.606: INFO: Waiting for pod security-context-41df1add-8f8c-4d9f-9442-70f9b84cd752 to disappear Mar 21 23:59:36.671: INFO: Pod security-context-41df1add-8f8c-4d9f-9442-70f9b84cd752 no longer exists [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 21 23:59:36.671: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-9048" for this suite. • [SLOW TEST:7.153 seconds] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Security Context should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]","total":330,"completed":43,"skipped":807,"failed":1,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]"]} SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 21 23:59:36.791: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 [It] should provide podname only [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin Mar 21 23:59:37.074: INFO: Waiting up to 5m0s for pod "downwardapi-volume-67cd9390-6558-4ce7-8aa0-cf1f7548a226" in namespace "projected-7126" to be "Succeeded or Failed" Mar 21 23:59:37.138: INFO: Pod "downwardapi-volume-67cd9390-6558-4ce7-8aa0-cf1f7548a226": Phase="Pending", Reason="", readiness=false. Elapsed: 63.766604ms Mar 21 23:59:39.220: INFO: Pod "downwardapi-volume-67cd9390-6558-4ce7-8aa0-cf1f7548a226": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.145741357s Mar 21 23:59:41.266: INFO: Pod "downwardapi-volume-67cd9390-6558-4ce7-8aa0-cf1f7548a226": Phase="Pending", Reason="", readiness=false. Elapsed: 4.192027735s Mar 21 23:59:43.290: INFO: Pod "downwardapi-volume-67cd9390-6558-4ce7-8aa0-cf1f7548a226": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.215488924s STEP: Saw pod success Mar 21 23:59:43.290: INFO: Pod "downwardapi-volume-67cd9390-6558-4ce7-8aa0-cf1f7548a226" satisfied condition "Succeeded or Failed" Mar 21 23:59:43.293: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-67cd9390-6558-4ce7-8aa0-cf1f7548a226 container client-container: STEP: delete the pod Mar 21 23:59:43.418: INFO: Waiting for pod downwardapi-volume-67cd9390-6558-4ce7-8aa0-cf1f7548a226 to disappear Mar 21 23:59:43.451: INFO: Pod downwardapi-volume-67cd9390-6558-4ce7-8aa0-cf1f7548a226 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 21 23:59:43.452: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7126" for this suite. • [SLOW TEST:6.716 seconds] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should provide podname only [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":330,"completed":44,"skipped":822,"failed":1,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]"]} SSSSS ------------------------------ [sig-node] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 21 23:59:43.508: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46 [It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Mar 21 23:59:43.779: INFO: Waiting up to 5m0s for pod "busybox-user-65534-02e90516-766b-4e7f-ab77-53c8f5192406" in namespace "security-context-test-5597" to be "Succeeded or Failed" Mar 21 23:59:43.826: INFO: Pod "busybox-user-65534-02e90516-766b-4e7f-ab77-53c8f5192406": Phase="Pending", Reason="", readiness=false. Elapsed: 47.173346ms Mar 21 23:59:45.985: INFO: Pod "busybox-user-65534-02e90516-766b-4e7f-ab77-53c8f5192406": Phase="Pending", Reason="", readiness=false. Elapsed: 2.206227063s Mar 21 23:59:48.019: INFO: Pod "busybox-user-65534-02e90516-766b-4e7f-ab77-53c8f5192406": Phase="Running", Reason="", readiness=true. 
Elapsed: 4.239580506s Mar 21 23:59:50.068: INFO: Pod "busybox-user-65534-02e90516-766b-4e7f-ab77-53c8f5192406": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.289225463s Mar 21 23:59:50.068: INFO: Pod "busybox-user-65534-02e90516-766b-4e7f-ab77-53c8f5192406" satisfied condition "Succeeded or Failed" [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 21 23:59:50.068: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-5597" for this suite. • [SLOW TEST:6.700 seconds] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 When creating a container with runAsUser /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:50 should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":330,"completed":45,"skipped":827,"failed":1,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]"]} SSSSSSSS ------------------------------ [sig-apps] ReplicaSet Replace and Patch tests [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 21 23:59:50.207: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] Replace and Patch tests [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Mar 21 23:59:50.413: INFO: Pod name sample-pod: Found 0 pods out of 1 Mar 21 23:59:55.462: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running STEP: Scaling up "test-rs" replicaset Mar 21 23:59:55.593: INFO: Updating replica set "test-rs" STEP: patching the ReplicaSet [AfterEach] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 21 23:59:55.769: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-3503" for this suite. 
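[editor's note] The ReplicaSet spec above scales "test-rs" up and then patches it, covering the two update verbs in the test name. A minimal sketch of both, assuming a ReplicaSet named test-rs already exists as in the run above (the exact payloads the e2e test sends are not shown in the log):

  # Patch: strategic-merge patch that bumps the replica count in place.
  kubectl patch rs test-rs -p '{"spec":{"replicas":3}}'
  # Replace: fetch the full object, edit it, and PUT it back.
  kubectl get rs test-rs -o yaml > rs.yaml
  # (edit rs.yaml)
  kubectl replace -f rs.yaml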
• [SLOW TEST:5.762 seconds] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 Replace and Patch tests [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet Replace and Patch tests [Conformance]","total":330,"completed":46,"skipped":835,"failed":1,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]"]} SSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 21 23:59:55.969: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 [It] should provide container's cpu request [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin Mar 21 23:59:56.198: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5cb2387d-1a1e-42e0-b289-ac0ff785221b" in namespace "downward-api-4231" to be "Succeeded or Failed" Mar 21 23:59:56.252: INFO: Pod "downwardapi-volume-5cb2387d-1a1e-42e0-b289-ac0ff785221b": Phase="Pending", Reason="", readiness=false. Elapsed: 53.429349ms Mar 21 23:59:58.404: INFO: Pod "downwardapi-volume-5cb2387d-1a1e-42e0-b289-ac0ff785221b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.20571803s Mar 22 00:00:00.458: INFO: Pod "downwardapi-volume-5cb2387d-1a1e-42e0-b289-ac0ff785221b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.259613092s Mar 22 00:00:02.508: INFO: Pod "downwardapi-volume-5cb2387d-1a1e-42e0-b289-ac0ff785221b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.309275584s STEP: Saw pod success Mar 22 00:00:02.508: INFO: Pod "downwardapi-volume-5cb2387d-1a1e-42e0-b289-ac0ff785221b" satisfied condition "Succeeded or Failed" Mar 22 00:00:02.549: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-5cb2387d-1a1e-42e0-b289-ac0ff785221b container client-container: STEP: delete the pod Mar 22 00:00:02.774: INFO: Waiting for pod downwardapi-volume-5cb2387d-1a1e-42e0-b289-ac0ff785221b to disappear Mar 22 00:00:02.879: INFO: Pod downwardapi-volume-5cb2387d-1a1e-42e0-b289-ac0ff785221b no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 00:00:02.879: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4231" for this suite. 
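[editor's note] The downward API volume test above exposes the container's own CPU request to it as a file. A minimal sketch of the same plumbing, with illustrative names and a busybox image rather than the test's agnhost container:

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: downwardapi-cpu-demo
  spec:
    restartPolicy: Never
    containers:
    - name: client-container
      image: busybox
      command: ["sh", "-c", "cat /etc/podinfo/cpu_request"]
      resources:
        requests:
          cpu: 250m
      volumeMounts:
      - name: podinfo
        mountPath: /etc/podinfo
    volumes:
    - name: podinfo
      downwardAPI:
        items:
        - path: cpu_request
          resourceFieldRef:
            containerName: client-container
            resource: requests.cpu
  EOF
  # Prints the request rounded up to whole cores by the default divisor of "1",
  # so a 250m request is reported as 1.
  kubectl logs downwardapi-cpu-demo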
• [SLOW TEST:7.032 seconds] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should provide container's cpu request [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":330,"completed":47,"skipped":838,"failed":1,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 00:00:03.002: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name configmap-test-volume-map-d39b16da-ac9d-4af0-9f14-a8f182406b99 STEP: Creating a pod to test consume configMaps Mar 22 00:00:03.254: INFO: Waiting up to 5m0s for pod "pod-configmaps-800c740d-4d5b-4e9f-88f9-74c0762edd49" in namespace "configmap-3305" to be "Succeeded or Failed" Mar 22 00:00:03.257: INFO: Pod "pod-configmaps-800c740d-4d5b-4e9f-88f9-74c0762edd49": Phase="Pending", Reason="", readiness=false. Elapsed: 3.000342ms Mar 22 00:00:05.652: INFO: Pod "pod-configmaps-800c740d-4d5b-4e9f-88f9-74c0762edd49": Phase="Pending", Reason="", readiness=false. Elapsed: 2.398042474s Mar 22 00:00:07.740: INFO: Pod "pod-configmaps-800c740d-4d5b-4e9f-88f9-74c0762edd49": Phase="Pending", Reason="", readiness=false. Elapsed: 4.485523557s Mar 22 00:00:09.790: INFO: Pod "pod-configmaps-800c740d-4d5b-4e9f-88f9-74c0762edd49": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.535673501s STEP: Saw pod success Mar 22 00:00:09.790: INFO: Pod "pod-configmaps-800c740d-4d5b-4e9f-88f9-74c0762edd49" satisfied condition "Succeeded or Failed" Mar 22 00:00:10.004: INFO: Trying to get logs from node latest-worker pod pod-configmaps-800c740d-4d5b-4e9f-88f9-74c0762edd49 container agnhost-container: STEP: delete the pod Mar 22 00:00:10.145: INFO: Waiting for pod pod-configmaps-800c740d-4d5b-4e9f-88f9-74c0762edd49 to disappear Mar 22 00:00:10.209: INFO: Pod pod-configmaps-800c740d-4d5b-4e9f-88f9-74c0762edd49 no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 00:00:10.209: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-3305" for this suite. 
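[editor's note] The ConfigMap test above consumes a key through an items: mapping with an explicit per-file mode, which is what "mappings and Item mode set" refers to. A minimal sketch under assumed names (key, path, and mode here are illustrative; the e2e test's agnhost container also verifies content and permissions, which this only approximates):

  kubectl create configmap cm-demo --from-literal=data-2=value-2
  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: configmap-mapping-demo
  spec:
    restartPolicy: Never
    containers:
    - name: test
      image: busybox
      command: ["sh", "-c", "cat /etc/cm/path/to/data-2"]
      volumeMounts:
      - name: cm
        mountPath: /etc/cm
    volumes:
    - name: cm
      configMap:
        name: cm-demo
        items:
        - key: data-2
          path: path/to/data-2   # remaps the key to a nested file path
          mode: 0400             # per-item mode overrides defaultMode
  EOF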
• [SLOW TEST:7.275 seconds] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":330,"completed":48,"skipped":888,"failed":1,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]"]} SSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 00:00:10.277: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-3122.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-2.dns-test-service-2.dns-3122.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/wheezy_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3122.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-3122.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-2.dns-test-service-2.dns-3122.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/jessie_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-3122.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 22 00:00:21.511: INFO: DNS probes using dns-3122/dns-test-c63bd359-809f-4844-8920-8ddc195adc30 succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 00:00:22.544: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-3122" for this suite. • [SLOW TEST:12.347 seconds] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":330,"completed":49,"skipped":905,"failed":1,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]"]} SSSSSSSSSSSSSS ------------------------------ [sig-node] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 00:00:22.624: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:53 [It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating pod liveness-4ff3adb6-18be-4755-a91f-8d73227a8f7f in namespace container-probe-3281 Mar 22 00:00:30.958: INFO: Started pod liveness-4ff3adb6-18be-4755-a91f-8d73227a8f7f in namespace container-probe-3281 STEP: checking the pod's current state and verifying that restartCount is present Mar 22 00:00:31.041: INFO: Initial restart count of pod liveness-4ff3adb6-18be-4755-a91f-8d73227a8f7f is 0 Mar 22 00:00:53.768: INFO: Restart count of pod container-probe-3281/liveness-4ff3adb6-18be-4755-a91f-8d73227a8f7f is now 1 (22.726717793s elapsed) STEP: deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 00:00:53.899: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-3281" for this suite. 
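[editor's note] The probe test above uses agnhost's liveness server, whose /healthz endpoint begins returning 500 shortly after startup; the kubelet then kills and restarts the container, and the restart count climbs from 0 to 1 as logged. A minimal sketch using the same image the run pulled (port and threshold values are assumptions, not taken from the log):

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: liveness-demo
  spec:
    containers:
    - name: agnhost-container
      image: k8s.gcr.io/e2e-test-images/agnhost:2.28
      args: ["liveness"]
      livenessProbe:
        httpGet:
          path: /healthz
          port: 8080   # assumed default port for agnhost's liveness server
        initialDelaySeconds: 15
        failureThreshold: 1
  EOF
  # Watch the RESTARTS column increment once the probe starts failing.
  kubectl get pod liveness-demo --watch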
• [SLOW TEST:31.591 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":330,"completed":50,"skipped":919,"failed":1,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]"]} SS ------------------------------ [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 00:00:54.215: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should allow opting out of API token automount [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: getting the auto-created API token Mar 22 00:00:55.336: INFO: created pod pod-service-account-defaultsa Mar 22 00:00:55.336: INFO: pod pod-service-account-defaultsa service account token volume mount: true Mar 22 00:00:55.345: INFO: created pod pod-service-account-mountsa Mar 22 00:00:55.345: INFO: pod pod-service-account-mountsa service account token volume mount: true Mar 22 00:00:55.364: INFO: created pod pod-service-account-nomountsa Mar 22 00:00:55.364: INFO: pod pod-service-account-nomountsa service account token volume mount: false Mar 22 00:00:55.395: INFO: created pod pod-service-account-defaultsa-mountspec Mar 22 00:00:55.395: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true Mar 22 00:00:55.507: INFO: created pod pod-service-account-mountsa-mountspec Mar 22 00:00:55.507: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true Mar 22 00:00:55.515: INFO: created pod pod-service-account-nomountsa-mountspec Mar 22 00:00:55.515: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true Mar 22 00:00:55.563: INFO: created pod pod-service-account-defaultsa-nomountspec Mar 22 00:00:55.564: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false Mar 22 00:00:55.645: INFO: created pod pod-service-account-mountsa-nomountspec Mar 22 00:00:55.645: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false Mar 22 00:00:55.672: INFO: created pod pod-service-account-nomountsa-nomountspec Mar 22 00:00:55.672: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false [AfterEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 00:00:55.672: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-9396" for this suite. 
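[editor's note] The matrix above crosses three service accounts with three pod-level settings: a token volume is mounted unless the pod spec opts out, or, when the pod spec is silent, unless the service account does. A minimal sketch of the two opt-out knobs, with illustrative names:

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: ServiceAccount
  metadata:
    name: nomount-sa
  automountServiceAccountToken: false
  ---
  apiVersion: v1
  kind: Pod
  metadata:
    name: pod-nomountsa
  spec:
    serviceAccountName: nomount-sa
    # The pod-level field, when set, takes precedence over the service account's.
    automountServiceAccountToken: false
    containers:
    - name: app
      image: k8s.gcr.io/pause:3.2
  EOF
  # No service account token volume should appear among the pod's volumes.
  kubectl get pod pod-nomountsa -o jsonpath='{.spec.volumes}'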
•{"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance]","total":330,"completed":51,"skipped":921,"failed":1,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 00:00:55.855: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not conflict [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Mar 22 00:00:57.547: INFO: The status of Pod pod-secrets-e02b5dbc-e0db-44f9-b15b-7bffb8f25238 is Pending, waiting for it to be Running (with Ready = true) Mar 22 00:01:00.246: INFO: The status of Pod pod-secrets-e02b5dbc-e0db-44f9-b15b-7bffb8f25238 is Pending, waiting for it to be Running (with Ready = true) Mar 22 00:01:01.959: INFO: The status of Pod pod-secrets-e02b5dbc-e0db-44f9-b15b-7bffb8f25238 is Pending, waiting for it to be Running (with Ready = true) Mar 22 00:01:04.293: INFO: The status of Pod pod-secrets-e02b5dbc-e0db-44f9-b15b-7bffb8f25238 is Pending, waiting for it to be Running (with Ready = true) Mar 22 00:01:06.106: INFO: The status of Pod pod-secrets-e02b5dbc-e0db-44f9-b15b-7bffb8f25238 is Pending, waiting for it to be Running (with Ready = true) Mar 22 00:01:07.655: INFO: The status of Pod pod-secrets-e02b5dbc-e0db-44f9-b15b-7bffb8f25238 is Pending, waiting for it to be Running (with Ready = true) Mar 22 00:01:09.613: INFO: The status of Pod pod-secrets-e02b5dbc-e0db-44f9-b15b-7bffb8f25238 is Pending, waiting for it to be Running (with Ready = true) Mar 22 00:01:11.578: INFO: The status of Pod pod-secrets-e02b5dbc-e0db-44f9-b15b-7bffb8f25238 is Running (Ready = true) STEP: Cleaning up the secret STEP: Cleaning up the configmap STEP: Cleaning up the pod [AfterEach] [sig-storage] EmptyDir wrapper volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 00:01:11.974: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-1737" for this suite. 
• [SLOW TEST:16.579 seconds] [sig-storage] EmptyDir wrapper volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should not conflict [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":330,"completed":52,"skipped":951,"failed":1,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 00:01:12.435: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test emptydir 0666 on node default medium Mar 22 00:01:13.926: INFO: Waiting up to 5m0s for pod "pod-0b74c9f5-914d-45ef-92ed-48b5df865d89" in namespace "emptydir-2571" to be "Succeeded or Failed" Mar 22 00:01:14.268: INFO: Pod "pod-0b74c9f5-914d-45ef-92ed-48b5df865d89": Phase="Pending", Reason="", readiness=false. Elapsed: 342.11405ms Mar 22 00:01:16.418: INFO: Pod "pod-0b74c9f5-914d-45ef-92ed-48b5df865d89": Phase="Pending", Reason="", readiness=false. Elapsed: 2.49145862s Mar 22 00:01:18.633: INFO: Pod "pod-0b74c9f5-914d-45ef-92ed-48b5df865d89": Phase="Pending", Reason="", readiness=false. Elapsed: 4.706839302s Mar 22 00:01:21.353: INFO: Pod "pod-0b74c9f5-914d-45ef-92ed-48b5df865d89": Phase="Pending", Reason="", readiness=false. Elapsed: 7.427290025s Mar 22 00:01:23.364: INFO: Pod "pod-0b74c9f5-914d-45ef-92ed-48b5df865d89": Phase="Running", Reason="", readiness=true. Elapsed: 9.43780758s Mar 22 00:01:25.939: INFO: Pod "pod-0b74c9f5-914d-45ef-92ed-48b5df865d89": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.012536644s STEP: Saw pod success Mar 22 00:01:25.939: INFO: Pod "pod-0b74c9f5-914d-45ef-92ed-48b5df865d89" satisfied condition "Succeeded or Failed" Mar 22 00:01:25.987: INFO: Trying to get logs from node latest-worker2 pod pod-0b74c9f5-914d-45ef-92ed-48b5df865d89 container test-container: STEP: delete the pod Mar 22 00:01:26.695: INFO: Waiting for pod pod-0b74c9f5-914d-45ef-92ed-48b5df865d89 to disappear Mar 22 00:01:26.772: INFO: Pod pod-0b74c9f5-914d-45ef-92ed-48b5df865d89 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 00:01:26.772: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2571" for this suite. 
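[editor's note] The emptyDir test above runs as root, writes a 0666-mode file on the default (node-disk) medium, and verifies mode and content. The e2e test drives this through an agnhost test container; the busybox one-liner below is only a rough approximation of the same check:

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: emptydir-0666-demo
  spec:
    restartPolicy: Never
    containers:
    - name: test-container
      image: busybox
      # umask 0 so the shell redirection creates the file as 0666.
      command: ["sh", "-c", "umask 0; echo content > /mnt/f && stat -c '%a %U' /mnt/f"]
      volumeMounts:
      - name: scratch
        mountPath: /mnt
    volumes:
    - name: scratch
      emptyDir: {}   # default medium is node disk; medium: Memory would use tmpfs
  EOF
  kubectl logs emptydir-0666-demo   # expect: 666 root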
• [SLOW TEST:14.527 seconds] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":330,"completed":53,"skipped":980,"failed":1,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]"]} SSSS ------------------------------ [sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 00:01:26.963: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:53 [It] should have monotonically increasing restart count [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating pod liveness-1d69dde1-c85e-44c0-9f95-c66d15be754c in namespace container-probe-9024 Mar 22 00:01:33.766: INFO: Started pod liveness-1d69dde1-c85e-44c0-9f95-c66d15be754c in namespace container-probe-9024 STEP: checking the pod's current state and verifying that restartCount is present Mar 22 00:01:33.974: INFO: Initial restart count of pod liveness-1d69dde1-c85e-44c0-9f95-c66d15be754c is 0 Mar 22 00:01:46.976: FAIL: getting pod Unexpected error: <*errors.StatusError | 0xc002e774a0>: { ErrStatus: { TypeMeta: {Kind: "", APIVersion: ""}, ListMeta: { SelfLink: "", ResourceVersion: "", Continue: "", RemainingItemCount: nil, }, Status: "Failure", Message: "pods \"liveness-1d69dde1-c85e-44c0-9f95-c66d15be754c\" not found", Reason: "NotFound", Details: { Name: "liveness-1d69dde1-c85e-44c0-9f95-c66d15be754c", Group: "", Kind: "pods", UID: "", Causes: nil, RetryAfterSeconds: 0, }, Code: 404, }, } pods "liveness-1d69dde1-c85e-44c0-9f95-c66d15be754c" not found occurred Full Stack Trace k8s.io/kubernetes/test/e2e/common/node.RunLivenessTest(0xc000190dc0, 0xc000d08800, 0x5, 0x45d964b800) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:607 +0xbaa k8s.io/kubernetes/test/e2e/common/node.glob..func2.8() /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:192 +0x156 k8s.io/kubernetes/test/e2e.RunE2ETests(0xc002c6a180) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c k8s.io/kubernetes/test/e2e.TestE2E(0xc002c6a180) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b testing.tRunner(0xc002c6a180, 0x6d60740) /usr/local/go/src/testing/testing.go:1194 +0xef created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1239 +0x2b3 STEP: deleting the pod 
[AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 STEP: Collecting events from namespace "container-probe-9024". STEP: Found 7 events. Mar 22 00:01:47.607: INFO: At 2021-03-22 00:01:27 +0000 UTC - event for liveness-1d69dde1-c85e-44c0-9f95-c66d15be754c: {default-scheduler } Scheduled: Successfully assigned container-probe-9024/liveness-1d69dde1-c85e-44c0-9f95-c66d15be754c to latest-worker2 Mar 22 00:01:47.607: INFO: At 2021-03-22 00:01:29 +0000 UTC - event for liveness-1d69dde1-c85e-44c0-9f95-c66d15be754c: {kubelet latest-worker2} Pulled: Container image "k8s.gcr.io/e2e-test-images/agnhost:2.28" already present on machine Mar 22 00:01:47.607: INFO: At 2021-03-22 00:01:31 +0000 UTC - event for liveness-1d69dde1-c85e-44c0-9f95-c66d15be754c: {kubelet latest-worker2} Created: Created container agnhost-container Mar 22 00:01:47.607: INFO: At 2021-03-22 00:01:31 +0000 UTC - event for liveness-1d69dde1-c85e-44c0-9f95-c66d15be754c: {kubelet latest-worker2} Started: Started container agnhost-container Mar 22 00:01:47.607: INFO: At 2021-03-22 00:01:41 +0000 UTC - event for liveness-1d69dde1-c85e-44c0-9f95-c66d15be754c: {kubelet latest-worker2} Unhealthy: Liveness probe failed: HTTP probe failed with statuscode: 500 Mar 22 00:01:47.607: INFO: At 2021-03-22 00:01:41 +0000 UTC - event for liveness-1d69dde1-c85e-44c0-9f95-c66d15be754c: {kubelet latest-worker2} Killing: Container agnhost-container failed liveness probe, will be restarted Mar 22 00:01:47.607: INFO: At 2021-03-22 00:01:45 +0000 UTC - event for liveness-1d69dde1-c85e-44c0-9f95-c66d15be754c: {taint-controller } TaintManagerEviction: Marking for deletion Pod container-probe-9024/liveness-1d69dde1-c85e-44c0-9f95-c66d15be754c Mar 22 00:01:48.333: INFO: POD NODE PHASE GRACE CONDITIONS Mar 22 00:01:48.334: INFO: Mar 22 00:01:48.589: INFO: Logging node info for node latest-control-plane Mar 22 00:01:49.452: INFO: Node Info: &Node{ObjectMeta:{latest-control-plane 490b9532-4cb6-4803-8805-500c50bef538 6974772 0 2021-02-19 10:11:38 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-control-plane kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master:] map[kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2021-02-19 10:11:38 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {kubeadm Update v1 2021-02-19 10:11:39 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{}}}}} {kube-controller-manager Update v1 2021-02-19 10:12:16 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:kind://docker/latest/latest-control-plane,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-03-21 23:59:33 +0000 UTC,LastTransitionTime:2021-02-19 10:11:38 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-03-21 23:59:33 +0000 UTC,LastTransitionTime:2021-02-19 10:11:38 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-03-21 23:59:33 +0000 UTC,LastTransitionTime:2021-02-19 10:11:38 +0000 
UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-03-21 23:59:33 +0000 UTC,LastTransitionTime:2021-02-19 10:12:15 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.14,},NodeAddress{Type:Hostname,Address:latest-control-plane,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:2277e49732264d9b915753a27b5b08cc,SystemUUID:3fcd47a6-9190-448f-a26a-9823c0424f23,BootID:b267d78b-f69b-4338-80e8-3f4944338e5d,KernelVersion:4.15.0-118-generic,OSImage:Ubuntu 20.04 LTS,ContainerRuntimeVersion:containerd://1.3.3-14-g449e9269,KubeletVersion:v1.21.0-alpha.0,KubeProxyVersion:v1.21.0-alpha.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/etcd:3.4.13-0],SizeBytes:254659261,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.21.0-alpha.0],SizeBytes:136866158,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.21.0-alpha.0],SizeBytes:95511852,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.21.0-alpha.0],SizeBytes:88147262,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.21.0-alpha.0],SizeBytes:66088748,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:a3f7549ea04c419276e8b84e90a515bbce5bc8a057be2ed974ec45492eca346e k8s.gcr.io/e2e-test-images/agnhost:2.26],SizeBytes:49216572,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:4f3eec1e602e1e0b0e5858c8f3d399d0ef536799ec1f7666e025f211001cb706 k8s.gcr.io/e2e-test-images/agnhost:2.28],SizeBytes:49210832,},ContainerImage{Names:[k8s.gcr.io/coredns:1.7.0],SizeBytes:45355487,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 docker.io/aquasec/kube-bench:0.3.1],SizeBytes:8042926,},ContainerImage{Names:[k8s.gcr.io/pause:3.2],SizeBytes:685724,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:301268,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 22 00:01:49.453: INFO: Logging kubelet events for node latest-control-plane Mar 22 00:01:49.851: INFO: Logging pods the kubelet thinks is on node latest-control-plane Mar 22 00:01:51.001: INFO: etcd-latest-control-plane started at 2021-02-19 10:11:46 +0000 UTC (0+1 container statuses recorded) Mar 22 00:01:51.001: INFO: Container etcd ready: true, restart count 0 Mar 22 00:01:51.001: INFO: kube-proxy-6jdsd started at 2021-02-19 10:11:54 +0000 UTC (0+1 container statuses recorded) Mar 22 00:01:51.001: INFO: Container kube-proxy ready: true, restart count 0 Mar 22 00:01:51.001: INFO: coredns-74ff55c5b-tqd5x started at 2021-03-22 00:01:48 +0000 UTC (0+1 container statuses recorded) Mar 22 00:01:51.001: INFO: Container coredns ready: false, restart count 0 Mar 22 00:01:51.001: INFO: kube-controller-manager-latest-control-plane started at 2021-02-19 10:11:46 +0000 UTC (0+1 container statuses recorded) Mar 22 00:01:51.001: INFO: Container kube-controller-manager ready: true, restart count 0 Mar 22 00:01:51.001: INFO: kube-scheduler-latest-control-plane started 
at 2021-02-19 10:11:46 +0000 UTC (0+1 container statuses recorded) Mar 22 00:01:51.001: INFO: Container kube-scheduler ready: true, restart count 0 Mar 22 00:01:51.001: INFO: kube-apiserver-latest-control-plane started at 2021-02-19 10:11:46 +0000 UTC (0+1 container statuses recorded) Mar 22 00:01:51.001: INFO: Container kube-apiserver ready: true, restart count 0 Mar 22 00:01:51.001: INFO: kindnet-94zqp started at 2021-02-19 10:11:54 +0000 UTC (0+1 container statuses recorded) Mar 22 00:01:51.001: INFO: Container kindnet-cni ready: true, restart count 0 Mar 22 00:01:51.001: INFO: coredns-74ff55c5b-9rxsk started at 2021-03-22 00:01:47 +0000 UTC (0+1 container statuses recorded) Mar 22 00:01:51.001: INFO: Container coredns ready: false, restart count 0 Mar 22 00:01:51.001: INFO: local-path-provisioner-8b46957d4-54gls started at 2021-02-19 10:12:21 +0000 UTC (0+1 container statuses recorded) Mar 22 00:01:51.001: INFO: Container local-path-provisioner ready: true, restart count 0 W0322 00:01:51.796547 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Mar 22 00:01:52.090: INFO: Latency metrics for node latest-control-plane Mar 22 00:01:52.090: INFO: Logging node info for node latest-worker Mar 22 00:01:53.275: INFO: Node Info: &Node{ObjectMeta:{latest-worker 52cd6d4b-d53f-435d-801a-04c2822dec44 6977856 0 2021-02-19 10:12:05 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux io.kubernetes.storage.mock/node:some-mock-node kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-worker kubernetes.io/os:linux] map[csi.volume.kubernetes.io/nodeid:{"csi-mock-csi-mock-volumes-11":"csi-mock-csi-mock-volumes-11","csi-mock-csi-mock-volumes-1170":"csi-mock-csi-mock-volumes-1170","csi-mock-csi-mock-volumes-1204":"csi-mock-csi-mock-volumes-1204","csi-mock-csi-mock-volumes-1243":"csi-mock-csi-mock-volumes-1243","csi-mock-csi-mock-volumes-135":"csi-mock-csi-mock-volumes-135","csi-mock-csi-mock-volumes-1517":"csi-mock-csi-mock-volumes-1517","csi-mock-csi-mock-volumes-1561":"csi-mock-csi-mock-volumes-1561","csi-mock-csi-mock-volumes-1595":"csi-mock-csi-mock-volumes-1595","csi-mock-csi-mock-volumes-1714":"csi-mock-csi-mock-volumes-1714","csi-mock-csi-mock-volumes-1732":"csi-mock-csi-mock-volumes-1732","csi-mock-csi-mock-volumes-1876":"csi-mock-csi-mock-volumes-1876","csi-mock-csi-mock-volumes-1945":"csi-mock-csi-mock-volumes-1945","csi-mock-csi-mock-volumes-2041":"csi-mock-csi-mock-volumes-2041","csi-mock-csi-mock-volumes-2057":"csi-mock-csi-mock-volumes-2057","csi-mock-csi-mock-volumes-2208":"csi-mock-csi-mock-volumes-2208","csi-mock-csi-mock-volumes-2302":"csi-mock-csi-mock-volumes-2302","csi-mock-csi-mock-volumes-2459":"csi-mock-csi-mock-volumes-2459","csi-mock-csi-mock-volumes-2511":"csi-mock-csi-mock-volumes-2511","csi-mock-csi-mock-volumes-2706":"csi-mock-csi-mock-volumes-2706","csi-mock-csi-mock-volumes-2710":"csi-mock-csi-mock-volumes-2710","csi-mock-csi-mock-volumes-273":"csi-mock-csi-mock-volumes-273","csi-mock-csi-mock-volumes-2782":"csi-mock-csi-mock-volumes-2782","csi-mock-csi-mock-volumes-2797":"csi-mock-csi-mock-volumes-2797","csi-mock-csi-mock-volumes-2805":"csi-mock-csi-mock-volumes-2805","csi-mock-csi-mock-volumes-2843":"csi-mock-csi-mock-volumes-2843","csi-mock-csi-mock-volumes-3075":"csi-mock-csi-mock-volumes-3075","csi-mock-csi-mock-volumes-3107":"csi-mock-csi-mock-volumes-3107","csi-mock-csi-mock-volumes-3115":"csi-mock-csi-mock-volumes-3115","csi-mock-csi-mock-volumes-3164":"csi-mock-csi-mock-volu
mes-3164","csi-mock-csi-mock-volumes-3202":"csi-mock-csi-mock-volumes-3202","csi-mock-csi-mock-volumes-3235":"csi-mock-csi-mock-volumes-3235","csi-mock-csi-mock-volumes-3298":"csi-mock-csi-mock-volumes-3298","csi-mock-csi-mock-volumes-3313":"csi-mock-csi-mock-volumes-3313","csi-mock-csi-mock-volumes-3364":"csi-mock-csi-mock-volumes-3364","csi-mock-csi-mock-volumes-3488":"csi-mock-csi-mock-volumes-3488","csi-mock-csi-mock-volumes-3490":"csi-mock-csi-mock-volumes-3490","csi-mock-csi-mock-volumes-3546":"csi-mock-csi-mock-volumes-3546","csi-mock-csi-mock-volumes-3568":"csi-mock-csi-mock-volumes-3568","csi-mock-csi-mock-volumes-3595":"csi-mock-csi-mock-volumes-3595","csi-mock-csi-mock-volumes-3615":"csi-mock-csi-mock-volumes-3615","csi-mock-csi-mock-volumes-3622":"csi-mock-csi-mock-volumes-3622","csi-mock-csi-mock-volumes-3660":"csi-mock-csi-mock-volumes-3660","csi-mock-csi-mock-volumes-3738":"csi-mock-csi-mock-volumes-3738","csi-mock-csi-mock-volumes-380":"csi-mock-csi-mock-volumes-380","csi-mock-csi-mock-volumes-3905":"csi-mock-csi-mock-volumes-3905","csi-mock-csi-mock-volumes-3983":"csi-mock-csi-mock-volumes-3983","csi-mock-csi-mock-volumes-4658":"csi-mock-csi-mock-volumes-4658","csi-mock-csi-mock-volumes-4689":"csi-mock-csi-mock-volumes-4689","csi-mock-csi-mock-volumes-4839":"csi-mock-csi-mock-volumes-4839","csi-mock-csi-mock-volumes-4871":"csi-mock-csi-mock-volumes-4871","csi-mock-csi-mock-volumes-4885":"csi-mock-csi-mock-volumes-4885","csi-mock-csi-mock-volumes-4888":"csi-mock-csi-mock-volumes-4888","csi-mock-csi-mock-volumes-5028":"csi-mock-csi-mock-volumes-5028","csi-mock-csi-mock-volumes-5118":"csi-mock-csi-mock-volumes-5118","csi-mock-csi-mock-volumes-5120":"csi-mock-csi-mock-volumes-5120","csi-mock-csi-mock-volumes-5160":"csi-mock-csi-mock-volumes-5160","csi-mock-csi-mock-volumes-5164":"csi-mock-csi-mock-volumes-5164","csi-mock-csi-mock-volumes-5225":"csi-mock-csi-mock-volumes-5225","csi-mock-csi-mock-volumes-526":"csi-mock-csi-mock-volumes-526","csi-mock-csi-mock-volumes-5365":"csi-mock-csi-mock-volumes-5365","csi-mock-csi-mock-volumes-5399":"csi-mock-csi-mock-volumes-5399","csi-mock-csi-mock-volumes-5443":"csi-mock-csi-mock-volumes-5443","csi-mock-csi-mock-volumes-5520":"csi-mock-csi-mock-volumes-5520","csi-mock-csi-mock-volumes-5561":"csi-mock-csi-mock-volumes-5561","csi-mock-csi-mock-volumes-5608":"csi-mock-csi-mock-volumes-5608","csi-mock-csi-mock-volumes-5652":"csi-mock-csi-mock-volumes-5652","csi-mock-csi-mock-volumes-5672":"csi-mock-csi-mock-volumes-5672","csi-mock-csi-mock-volumes-569":"csi-mock-csi-mock-volumes-569","csi-mock-csi-mock-volumes-5759":"csi-mock-csi-mock-volumes-5759","csi-mock-csi-mock-volumes-5910":"csi-mock-csi-mock-volumes-5910","csi-mock-csi-mock-volumes-6046":"csi-mock-csi-mock-volumes-6046","csi-mock-csi-mock-volumes-6099":"csi-mock-csi-mock-volumes-6099","csi-mock-csi-mock-volumes-621":"csi-mock-csi-mock-volumes-621","csi-mock-csi-mock-volumes-6347":"csi-mock-csi-mock-volumes-6347","csi-mock-csi-mock-volumes-6447":"csi-mock-csi-mock-volumes-6447","csi-mock-csi-mock-volumes-6752":"csi-mock-csi-mock-volumes-6752","csi-mock-csi-mock-volumes-6763":"csi-mock-csi-mock-volumes-6763","csi-mock-csi-mock-volumes-7184":"csi-mock-csi-mock-volumes-7184","csi-mock-csi-mock-volumes-7244":"csi-mock-csi-mock-volumes-7244","csi-mock-csi-mock-volumes-7259":"csi-mock-csi-mock-volumes-7259","csi-mock-csi-mock-volumes-726":"csi-mock-csi-mock-volumes-726","csi-mock-csi-mock-volumes-7302":"csi-mock-csi-mock-volumes-7302","csi-mock-csi-mock-volumes-7346":"csi-mock-csi-mock-volum
es-7346","csi-mock-csi-mock-volumes-7378":"csi-mock-csi-mock-volumes-7378","csi-mock-csi-mock-volumes-7385":"csi-mock-csi-mock-volumes-7385","csi-mock-csi-mock-volumes-746":"csi-mock-csi-mock-volumes-746","csi-mock-csi-mock-volumes-7574":"csi-mock-csi-mock-volumes-7574","csi-mock-csi-mock-volumes-7712":"csi-mock-csi-mock-volumes-7712","csi-mock-csi-mock-volumes-7749":"csi-mock-csi-mock-volumes-7749","csi-mock-csi-mock-volumes-7820":"csi-mock-csi-mock-volumes-7820","csi-mock-csi-mock-volumes-7946":"csi-mock-csi-mock-volumes-7946","csi-mock-csi-mock-volumes-8159":"csi-mock-csi-mock-volumes-8159","csi-mock-csi-mock-volumes-8382":"csi-mock-csi-mock-volumes-8382","csi-mock-csi-mock-volumes-8458":"csi-mock-csi-mock-volumes-8458","csi-mock-csi-mock-volumes-8569":"csi-mock-csi-mock-volumes-8569","csi-mock-csi-mock-volumes-8626":"csi-mock-csi-mock-volumes-8626","csi-mock-csi-mock-volumes-8627":"csi-mock-csi-mock-volumes-8627","csi-mock-csi-mock-volumes-8774":"csi-mock-csi-mock-volumes-8774","csi-mock-csi-mock-volumes-8777":"csi-mock-csi-mock-volumes-8777","csi-mock-csi-mock-volumes-8880":"csi-mock-csi-mock-volumes-8880","csi-mock-csi-mock-volumes-923":"csi-mock-csi-mock-volumes-923","csi-mock-csi-mock-volumes-9279":"csi-mock-csi-mock-volumes-9279","csi-mock-csi-mock-volumes-9372":"csi-mock-csi-mock-volumes-9372","csi-mock-csi-mock-volumes-9687":"csi-mock-csi-mock-volumes-9687","csi-mock-csi-mock-volumes-969":"csi-mock-csi-mock-volumes-969","csi-mock-csi-mock-volumes-983":"csi-mock-csi-mock-volumes-983","csi-mock-csi-mock-volumes-9858":"csi-mock-csi-mock-volumes-9858","csi-mock-csi-mock-volumes-9983":"csi-mock-csi-mock-volumes-9983","csi-mock-csi-mock-volumes-9992":"csi-mock-csi-mock-volumes-9992","csi-mock-csi-mock-volumes-9995":"csi-mock-csi-mock-volumes-9995"} kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-02-19 10:12:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {kube-controller-manager Update v1 2021-03-21 23:45:22 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}}}}} {kubelet Update v1 2021-03-21 23:45:51 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:csi.volume.kubernetes.io/nodeid":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:io.kubernetes.storage.mock/node":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {e2e.test Update v1 2021-03-22 00:01:46 +0000 UTC FieldsV1 {"f:spec":{"f:taints":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:kind://docker/latest/latest-worker,Unschedulable:false,Taints:[]Taint{Taint{Key:kubernetes.io/e2e-evict-taint-key,Value:evictTaintVal,Effect:NoExecute,TimeAdded:2021-03-22 00:01:45 +0000 UTC,},},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-03-22 00:00:53 +0000 UTC,LastTransitionTime:2021-02-19 10:12:05 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-03-22 00:00:53 +0000 UTC,LastTransitionTime:2021-02-19 10:12:05 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-03-22 00:00:53 +0000 UTC,LastTransitionTime:2021-02-19 10:12:05 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-03-22 00:00:53 +0000 UTC,LastTransitionTime:2021-02-19 10:12:25 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready 
status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.9,},NodeAddress{Type:Hostname,Address:latest-worker,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:9501a3ffa7bf40adaca8f4cb3d1dcc93,SystemUUID:6921bb21-67c9-42ea-b514-81b1daac5968,BootID:b267d78b-f69b-4338-80e8-3f4944338e5d,KernelVersion:4.15.0-118-generic,OSImage:Ubuntu 20.04 LTS,ContainerRuntimeVersion:containerd://1.3.3-14-g449e9269,KubeletVersion:v1.21.0-alpha.0,KubeProxyVersion:v1.21.0-alpha.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/ollivier/clearwater-cassandra@sha256:7460a5e29c91e16bd36534b5284b27832e1c6e0f7c22a4bb79eda79942d250e1 docker.io/ollivier/clearwater-cassandra:hunter],SizeBytes:386500834,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead-prov@sha256:ef3803b4336ea217f9a467dac7f5c4bc3d2cd0f3cbbcc6c101419506fc3a6fa4 docker.io/ollivier/clearwater-homestead-prov:hunter],SizeBytes:360721934,},ContainerImage{Names:[docker.io/ollivier/clearwater-ellis@sha256:abcaf4a9ec74aa746f3b3949f05ee359349fce2b0b3b2deedf67164719cff5dc docker.io/ollivier/clearwater-ellis:hunter],SizeBytes:351519591,},ContainerImage{Names:[docker.io/ollivier/clearwater-homer@sha256:f8738d756137990120f6aafca7e0bc57186ab591a4b51508ba9ddf30688f4de1 docker.io/ollivier/clearwater-homer:hunter],SizeBytes:344304298,},ContainerImage{Names:[docker.io/ollivier/clearwater-astaire@sha256:5bc730435563dc3e31f3a3b7bb58b5899c453929b624052622b3a9a1c2fd04d8 docker.io/ollivier/clearwater-astaire:hunter],SizeBytes:327310970,},ContainerImage{Names:[docker.io/ollivier/clearwater-bono@sha256:0da41ddbf72c158f8e617015450587634cdf6856696005601b2108df92a27254 docker.io/ollivier/clearwater-bono:hunter],SizeBytes:303708624,},ContainerImage{Names:[docker.io/ollivier/clearwater-sprout@sha256:fd91f90bdfbfac30288ca296962e36537f8fa311d34711904e027098227d9f49 docker.io/ollivier/clearwater-sprout:hunter],SizeBytes:298627136,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead@sha256:8467d6c9caefa6a402eba927ee31f8179b2dc3386ac4768dd04f4bd630a1b9e9 docker.io/ollivier/clearwater-homestead:hunter],SizeBytes:295167572,},ContainerImage{Names:[docker.io/ollivier/clearwater-ralf@sha256:7054a741dbd4ea45fa4efd8138c07e21d35fbed627971559de5df2d03883e94f docker.io/ollivier/clearwater-ralf:hunter],SizeBytes:287441316,},ContainerImage{Names:[docker.io/ollivier/clearwater-chronos@sha256:1d5f9b5385db632e3076463fec5a4db5d4a15ef21ac5a112cb42fb0551ffa36d docker.io/ollivier/clearwater-chronos:hunter],SizeBytes:285504787,},ContainerImage{Names:[k8s.gcr.io/etcd:3.4.13-0],SizeBytes:254659261,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.21.0-alpha.0],SizeBytes:136866158,},ContainerImage{Names:[docker.io/aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 docker.io/aquasec/kube-hunter:0.3.1],SizeBytes:124684106,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:112029652,},ContainerImage{Names:[docker.io/gluster/glusterdynamic-provisioner@sha256:90067cb05a7d217651e84576935934fd9dcff8e6bcdcbaa416bbf36fcd09dbd1 
docker.io/gluster/glusterdynamic-provisioner:v1.0],SizeBytes:111200078,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner@sha256:8bc20b52ce066dd4ea3d9eaac40c04ea8a77f47c33789676580cf4c7c9ea3c3d k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner:v1.0],SizeBytes:111199402,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.21.0-alpha.0],SizeBytes:95511852,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.21.0-alpha.0],SizeBytes:88147262,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:85425365,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.21.0-alpha.0],SizeBytes:66088748,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:49230179,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:a3f7549ea04c419276e8b84e90a515bbce5bc8a057be2ed974ec45492eca346e k8s.gcr.io/e2e-test-images/agnhost:2.26],SizeBytes:49216572,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:4f3eec1e602e1e0b0e5858c8f3d399d0ef536799ec1f7666e025f211001cb706 k8s.gcr.io/e2e-test-images/agnhost:2.28],SizeBytes:49210832,},ContainerImage{Names:[docker.io/bitnami/kubectl@sha256:471a2f06834db208d0e4bef5856550f911357c41b72d0baefa591b3714839067 docker.io/bitnami/kubectl:latest],SizeBytes:48898314,},ContainerImage{Names:[k8s.gcr.io/coredns:1.7.0],SizeBytes:45355487,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:41902332,},ContainerImage{Names:[docker.io/library/httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a docker.io/library/httpd:2.4.39-alpine],SizeBytes:41901429,},ContainerImage{Names:[docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 docker.io/library/httpd:2.4.38-alpine],SizeBytes:40765017,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:40765006,},ContainerImage{Names:[docker.io/ollivier/clearwater-live-test@sha256:1ec1c46f531b5466e62e2a3f1e65e97336e8ea0ca50f11a4305775d6aeac9a58 docker.io/ollivier/clearwater-live-test:hunter],SizeBytes:39060692,},ContainerImage{Names:[k8s.gcr.io/build-image/debian-iptables@sha256:abe8cef9e116f2d5ec1175c386e33841ff3386779138b425af876384b0fd7ccb k8s.gcr.io/build-image/debian-iptables:buster-v1.5.0],SizeBytes:38088315,},ContainerImage{Names:[k8s.gcr.io/build-image/debian-iptables@sha256:4c9410a4ee555dcb0e8b7bd6fc77c65ac400f7c5bd4555df8187630efaea6ea4 k8s.gcr.io/build-image/debian-iptables:buster-v1.3.0],SizeBytes:37934917,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:ff02aacd9766d597883fabafc7ad604c719a57611db1bcc1564c69a45b000a55 gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.17],SizeBytes:25311280,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 
k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:24757245,},ContainerImage{Names:[docker.io/pingcap/chaos-daemon@sha256:2138de599ca94104101fcd8f3bd04401f8c88f2bc721d0f77d55a81b85f4d92f docker.io/pingcap/chaos-daemon:v0.8.0],SizeBytes:21885344,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:20c828075d1e36f679d6a91e905b0927141eef5e15be0c9a1ca4a6a0ed9313d2 k8s.gcr.io/sig-storage/csi-provisioner:v2.1.0],SizeBytes:21205045,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:78e3393f5fd5ff6c1e5dada2478cfa456fb7164929e573cf9a87bf6532730679 k8s.gcr.io/sig-storage/csi-provisioner:v1.6.0],SizeBytes:19408504,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:2ffa647e8107cfd39e5f464e738dce014c9f5e51b108da36c3ab621048d0bbab k8s.gcr.io/sig-storage/csi-attacher:v2.2.0],SizeBytes:18451536,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:6c6a0332693a7c456378f6abd2bb40611826c1e1a733cadbdae2daab3125b71c k8s.gcr.io/sig-storage/csi-resizer:v0.5.0],SizeBytes:18412631,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:4051e85640c22f8e00c34dbd273576fc9e1e2829992656588062be9c0f69b04b k8s.gcr.io/e2e-test-images/nonroot:1.1],SizeBytes:17748448,},ContainerImage{Names:[docker.io/pingcap/chaos-mesh@sha256:b90c04665e0275602e52df37d4b6fdd4621df7d4e4c823fc63356559f67dea72 docker.io/pingcap/chaos-mesh:v0.8.0],SizeBytes:13863887,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e docker.io/coredns/coredns:latest docker.io/coredns/coredns:1.8.0],SizeBytes:12945155,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:4a6e0769130686518325b21b0c1d0688b54e7c79244d48e1b15634e98e40c6ef docker.io/coredns/coredns:1.7.1],SizeBytes:12935701,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 docker.io/aquasec/kube-bench:0.3.1],SizeBytes:8042926,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:9622c6a6dac7499a055a382930f4de82905a3c5735c0753f7094115c9c871309 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v1.3.0],SizeBytes:7717137,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 22 00:01:53.276: INFO: Logging kubelet events for node latest-worker Mar 22 00:01:53.547: INFO: Logging pods the kubelet thinks is on node latest-worker Mar 22 00:01:53.968: INFO: kindnet-g99fx started at 2021-03-21 23:50:18 +0000 UTC (0+1 container statuses recorded) Mar 22 00:01:53.968: INFO: Container kindnet-cni ready: true, restart count 0 Mar 22 00:01:53.968: INFO: pod-service-account-mountsa-nomountspec started at 2021-03-22 00:00:55 +0000 UTC (0+1 container statuses recorded) Mar 22 00:01:53.968: INFO: Container token-test ready: false, restart count 0 Mar 22 00:01:53.968: INFO: coredns-74ff55c5b-9sxfg started at 2021-03-21 23:57:22 +0000 UTC (0+1 container statuses recorded) Mar 22 00:01:53.968: INFO: Container coredns ready: true, restart count 0 Mar 22 00:01:53.968: INFO: taint-eviction-a2 started at 2021-03-22 00:01:45 +0000 UTC (0+1 container 
statuses recorded) Mar 22 00:01:53.968: INFO: Container pause ready: false, restart count 0 Mar 22 00:01:53.968: INFO: pod-service-account-nomountsa-mountspec started at 2021-03-22 00:00:55 +0000 UTC (0+1 container statuses recorded) Mar 22 00:01:53.968: INFO: Container token-test ready: false, restart count 0 Mar 22 00:01:53.968: INFO: chaos-daemon-jxjgk started at 2021-03-21 23:50:17 +0000 UTC (0+1 container statuses recorded) Mar 22 00:01:53.968: INFO: Container chaos-daemon ready: true, restart count 0 Mar 22 00:01:53.968: INFO: pod-50c28aff-8fa5-4eed-8f2e-26ae9fc01ff3 started at 2021-03-22 00:01:35 +0000 UTC (0+1 container statuses recorded) Mar 22 00:01:53.968: INFO: Container write-pod ready: false, restart count 0 Mar 22 00:01:53.968: INFO: kube-proxy-5wvjm started at 2021-02-19 10:12:05 +0000 UTC (0+1 container statuses recorded) Mar 22 00:01:53.968: INFO: Container kube-proxy ready: true, restart count 0 Mar 22 00:01:53.968: INFO: pod-04f832cb-d18a-4ca9-b5d7-4ff13a82c1a4 started at 2021-03-22 00:01:29 +0000 UTC (0+1 container statuses recorded) Mar 22 00:01:53.968: INFO: Container write-pod ready: false, restart count 0 W0322 00:01:54.260249 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Mar 22 00:01:54.750: INFO: Latency metrics for node latest-worker Mar 22 00:01:54.750: INFO: Logging node info for node latest-worker2 Mar 22 00:01:54.826: INFO: Node Info: &Node{ObjectMeta:{latest-worker2 7d2a1377-0c6f-45fb-899e-6c307ecb1803 6977837 0 2021-02-19 10:12:05 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux io.kubernetes.storage.mock/node:some-mock-node kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-worker2 kubernetes.io/os:linux] map[csi.volume.kubernetes.io/nodeid:{"csi-mock-csi-mock-volumes-1005":"csi-mock-csi-mock-volumes-1005","csi-mock-csi-mock-volumes-1125":"csi-mock-csi-mock-volumes-1125","csi-mock-csi-mock-volumes-1158":"csi-mock-csi-mock-volumes-1158","csi-mock-csi-mock-volumes-1167":"csi-mock-csi-mock-volumes-1167","csi-mock-csi-mock-volumes-117":"csi-mock-csi-mock-volumes-117","csi-mock-csi-mock-volumes-123":"csi-mock-csi-mock-volumes-123","csi-mock-csi-mock-volumes-1248":"csi-mock-csi-mock-volumes-1248","csi-mock-csi-mock-volumes-132":"csi-mock-csi-mock-volumes-132","csi-mock-csi-mock-volumes-1462":"csi-mock-csi-mock-volumes-1462","csi-mock-csi-mock-volumes-1508":"csi-mock-csi-mock-volumes-1508","csi-mock-csi-mock-volumes-1620":"csi-mock-csi-mock-volumes-1620","csi-mock-csi-mock-volumes-1694":"csi-mock-csi-mock-volumes-1694","csi-mock-csi-mock-volumes-1823":"csi-mock-csi-mock-volumes-1823","csi-mock-csi-mock-volumes-1829":"csi-mock-csi-mock-volumes-1829","csi-mock-csi-mock-volumes-1852":"csi-mock-csi-mock-volumes-1852","csi-mock-csi-mock-volumes-1856":"csi-mock-csi-mock-volumes-1856","csi-mock-csi-mock-volumes-194":"csi-mock-csi-mock-volumes-194","csi-mock-csi-mock-volumes-1943":"csi-mock-csi-mock-volumes-1943","csi-mock-csi-mock-volumes-2103":"csi-mock-csi-mock-volumes-2103","csi-mock-csi-mock-volumes-2109":"csi-mock-csi-mock-volumes-2109","csi-mock-csi-mock-volumes-2134":"csi-mock-csi-mock-volumes-2134","csi-mock-csi-mock-volumes-2212":"csi-mock-csi-mock-volumes-2212","csi-mock-csi-mock-volumes-2302":"csi-mock-csi-mock-volumes-2302","csi-mock-csi-mock-volumes-2308":"csi-mock-csi-mock-volumes-2308","csi-mock-csi-mock-volumes-2407":"csi-mock-csi-mock-volumes-2407","csi-mock-csi-mock-volumes-2458":"csi-mock-csi-mock-volumes-2458","csi-mock-csi-mock-volumes-2474":"cs
i-mock-csi-mock-volumes-2474","csi-mock-csi-mock-volumes-254":"csi-mock-csi-mock-volumes-254","csi-mock-csi-mock-volumes-2575":"csi-mock-csi-mock-volumes-2575","csi-mock-csi-mock-volumes-2622":"csi-mock-csi-mock-volumes-2622","csi-mock-csi-mock-volumes-2651":"csi-mock-csi-mock-volumes-2651","csi-mock-csi-mock-volumes-2668":"csi-mock-csi-mock-volumes-2668","csi-mock-csi-mock-volumes-2781":"csi-mock-csi-mock-volumes-2781","csi-mock-csi-mock-volumes-2791":"csi-mock-csi-mock-volumes-2791","csi-mock-csi-mock-volumes-2823":"csi-mock-csi-mock-volumes-2823","csi-mock-csi-mock-volumes-2847":"csi-mock-csi-mock-volumes-2847","csi-mock-csi-mock-volumes-295":"csi-mock-csi-mock-volumes-295","csi-mock-csi-mock-volumes-3129":"csi-mock-csi-mock-volumes-3129","csi-mock-csi-mock-volumes-3190":"csi-mock-csi-mock-volumes-3190","csi-mock-csi-mock-volumes-3428":"csi-mock-csi-mock-volumes-3428","csi-mock-csi-mock-volumes-3490":"csi-mock-csi-mock-volumes-3490","csi-mock-csi-mock-volumes-3491":"csi-mock-csi-mock-volumes-3491","csi-mock-csi-mock-volumes-3532":"csi-mock-csi-mock-volumes-3532","csi-mock-csi-mock-volumes-3609":"csi-mock-csi-mock-volumes-3609","csi-mock-csi-mock-volumes-368":"csi-mock-csi-mock-volumes-368","csi-mock-csi-mock-volumes-3698":"csi-mock-csi-mock-volumes-3698","csi-mock-csi-mock-volumes-3714":"csi-mock-csi-mock-volumes-3714","csi-mock-csi-mock-volumes-3720":"csi-mock-csi-mock-volumes-3720","csi-mock-csi-mock-volumes-3722":"csi-mock-csi-mock-volumes-3722","csi-mock-csi-mock-volumes-3723":"csi-mock-csi-mock-volumes-3723","csi-mock-csi-mock-volumes-3783":"csi-mock-csi-mock-volumes-3783","csi-mock-csi-mock-volumes-3887":"csi-mock-csi-mock-volumes-3887","csi-mock-csi-mock-volumes-3973":"csi-mock-csi-mock-volumes-3973","csi-mock-csi-mock-volumes-4074":"csi-mock-csi-mock-volumes-4074","csi-mock-csi-mock-volumes-4164":"csi-mock-csi-mock-volumes-4164","csi-mock-csi-mock-volumes-4181":"csi-mock-csi-mock-volumes-4181","csi-mock-csi-mock-volumes-4442":"csi-mock-csi-mock-volumes-4442","csi-mock-csi-mock-volumes-4483":"csi-mock-csi-mock-volumes-4483","csi-mock-csi-mock-volumes-4549":"csi-mock-csi-mock-volumes-4549","csi-mock-csi-mock-volumes-4707":"csi-mock-csi-mock-volumes-4707","csi-mock-csi-mock-volumes-4906":"csi-mock-csi-mock-volumes-4906","csi-mock-csi-mock-volumes-4977":"csi-mock-csi-mock-volumes-4977","csi-mock-csi-mock-volumes-5116":"csi-mock-csi-mock-volumes-5116","csi-mock-csi-mock-volumes-5166":"csi-mock-csi-mock-volumes-5166","csi-mock-csi-mock-volumes-5233":"csi-mock-csi-mock-volumes-5233","csi-mock-csi-mock-volumes-5344":"csi-mock-csi-mock-volumes-5344","csi-mock-csi-mock-volumes-5406":"csi-mock-csi-mock-volumes-5406","csi-mock-csi-mock-volumes-5433":"csi-mock-csi-mock-volumes-5433","csi-mock-csi-mock-volumes-5483":"csi-mock-csi-mock-volumes-5483","csi-mock-csi-mock-volumes-5486":"csi-mock-csi-mock-volumes-5486","csi-mock-csi-mock-volumes-5520":"csi-mock-csi-mock-volumes-5520","csi-mock-csi-mock-volumes-5540":"csi-mock-csi-mock-volumes-5540","csi-mock-csi-mock-volumes-5592":"csi-mock-csi-mock-volumes-5592","csi-mock-csi-mock-volumes-5741":"csi-mock-csi-mock-volumes-5741","csi-mock-csi-mock-volumes-5753":"csi-mock-csi-mock-volumes-5753","csi-mock-csi-mock-volumes-5790":"csi-mock-csi-mock-volumes-5790","csi-mock-csi-mock-volumes-5820":"csi-mock-csi-mock-volumes-5820","csi-mock-csi-mock-volumes-5830":"csi-mock-csi-mock-volumes-5830","csi-mock-csi-mock-volumes-5880":"csi-mock-csi-mock-volumes-5880","csi-mock-csi-mock-volumes-5886":"csi-mock-csi-mock-volumes-5886","csi-mock-csi-mock-volumes-5899":
"csi-mock-csi-mock-volumes-5899","csi-mock-csi-mock-volumes-5925":"csi-mock-csi-mock-volumes-5925","csi-mock-csi-mock-volumes-5928":"csi-mock-csi-mock-volumes-5928","csi-mock-csi-mock-volumes-6098":"csi-mock-csi-mock-volumes-6098","csi-mock-csi-mock-volumes-6154":"csi-mock-csi-mock-volumes-6154","csi-mock-csi-mock-volumes-6193":"csi-mock-csi-mock-volumes-6193","csi-mock-csi-mock-volumes-6237":"csi-mock-csi-mock-volumes-6237","csi-mock-csi-mock-volumes-6393":"csi-mock-csi-mock-volumes-6393","csi-mock-csi-mock-volumes-6394":"csi-mock-csi-mock-volumes-6394","csi-mock-csi-mock-volumes-6468":"csi-mock-csi-mock-volumes-6468","csi-mock-csi-mock-volumes-6508":"csi-mock-csi-mock-volumes-6508","csi-mock-csi-mock-volumes-6516":"csi-mock-csi-mock-volumes-6516","csi-mock-csi-mock-volumes-6520":"csi-mock-csi-mock-volumes-6520","csi-mock-csi-mock-volumes-6574":"csi-mock-csi-mock-volumes-6574","csi-mock-csi-mock-volumes-6663":"csi-mock-csi-mock-volumes-6663","csi-mock-csi-mock-volumes-6715":"csi-mock-csi-mock-volumes-6715","csi-mock-csi-mock-volumes-6754":"csi-mock-csi-mock-volumes-6754","csi-mock-csi-mock-volumes-6804":"csi-mock-csi-mock-volumes-6804","csi-mock-csi-mock-volumes-6918":"csi-mock-csi-mock-volumes-6918","csi-mock-csi-mock-volumes-6925":"csi-mock-csi-mock-volumes-6925","csi-mock-csi-mock-volumes-7092":"csi-mock-csi-mock-volumes-7092","csi-mock-csi-mock-volumes-7139":"csi-mock-csi-mock-volumes-7139","csi-mock-csi-mock-volumes-7270":"csi-mock-csi-mock-volumes-7270","csi-mock-csi-mock-volumes-7273":"csi-mock-csi-mock-volumes-7273","csi-mock-csi-mock-volumes-7442":"csi-mock-csi-mock-volumes-7442","csi-mock-csi-mock-volumes-7448":"csi-mock-csi-mock-volumes-7448","csi-mock-csi-mock-volumes-7543":"csi-mock-csi-mock-volumes-7543","csi-mock-csi-mock-volumes-7597":"csi-mock-csi-mock-volumes-7597","csi-mock-csi-mock-volumes-7608":"csi-mock-csi-mock-volumes-7608","csi-mock-csi-mock-volumes-7642":"csi-mock-csi-mock-volumes-7642","csi-mock-csi-mock-volumes-7659":"csi-mock-csi-mock-volumes-7659","csi-mock-csi-mock-volumes-7725":"csi-mock-csi-mock-volumes-7725","csi-mock-csi-mock-volumes-7760":"csi-mock-csi-mock-volumes-7760","csi-mock-csi-mock-volumes-778":"csi-mock-csi-mock-volumes-778","csi-mock-csi-mock-volumes-7811":"csi-mock-csi-mock-volumes-7811","csi-mock-csi-mock-volumes-7819":"csi-mock-csi-mock-volumes-7819","csi-mock-csi-mock-volumes-791":"csi-mock-csi-mock-volumes-791","csi-mock-csi-mock-volumes-7929":"csi-mock-csi-mock-volumes-7929","csi-mock-csi-mock-volumes-7930":"csi-mock-csi-mock-volumes-7930","csi-mock-csi-mock-volumes-7933":"csi-mock-csi-mock-volumes-7933","csi-mock-csi-mock-volumes-7950":"csi-mock-csi-mock-volumes-7950","csi-mock-csi-mock-volumes-8005":"csi-mock-csi-mock-volumes-8005","csi-mock-csi-mock-volumes-8070":"csi-mock-csi-mock-volumes-8070","csi-mock-csi-mock-volumes-8123":"csi-mock-csi-mock-volumes-8123","csi-mock-csi-mock-volumes-8132":"csi-mock-csi-mock-volumes-8132","csi-mock-csi-mock-volumes-8134":"csi-mock-csi-mock-volumes-8134","csi-mock-csi-mock-volumes-8381":"csi-mock-csi-mock-volumes-8381","csi-mock-csi-mock-volumes-8391":"csi-mock-csi-mock-volumes-8391","csi-mock-csi-mock-volumes-8409":"csi-mock-csi-mock-volumes-8409","csi-mock-csi-mock-volumes-8420":"csi-mock-csi-mock-volumes-8420","csi-mock-csi-mock-volumes-8467":"csi-mock-csi-mock-volumes-8467","csi-mock-csi-mock-volumes-8581":"csi-mock-csi-mock-volumes-8581","csi-mock-csi-mock-volumes-8619":"csi-mock-csi-mock-volumes-8619","csi-mock-csi-mock-volumes-8708":"csi-mock-csi-mock-volumes-8708","csi-mock-csi-mock-volumes-8
766":"csi-mock-csi-mock-volumes-8766","csi-mock-csi-mock-volumes-8789":"csi-mock-csi-mock-volumes-8789","csi-mock-csi-mock-volumes-8800":"csi-mock-csi-mock-volumes-8800","csi-mock-csi-mock-volumes-8819":"csi-mock-csi-mock-volumes-8819","csi-mock-csi-mock-volumes-8830":"csi-mock-csi-mock-volumes-8830","csi-mock-csi-mock-volumes-9034":"csi-mock-csi-mock-volumes-9034","csi-mock-csi-mock-volumes-9051":"csi-mock-csi-mock-volumes-9051","csi-mock-csi-mock-volumes-9052":"csi-mock-csi-mock-volumes-9052","csi-mock-csi-mock-volumes-9211":"csi-mock-csi-mock-volumes-9211","csi-mock-csi-mock-volumes-9234":"csi-mock-csi-mock-volumes-9234","csi-mock-csi-mock-volumes-9238":"csi-mock-csi-mock-volumes-9238","csi-mock-csi-mock-volumes-9316":"csi-mock-csi-mock-volumes-9316","csi-mock-csi-mock-volumes-9344":"csi-mock-csi-mock-volumes-9344","csi-mock-csi-mock-volumes-9386":"csi-mock-csi-mock-volumes-9386","csi-mock-csi-mock-volumes-941":"csi-mock-csi-mock-volumes-941","csi-mock-csi-mock-volumes-9416":"csi-mock-csi-mock-volumes-9416","csi-mock-csi-mock-volumes-9426":"csi-mock-csi-mock-volumes-9426","csi-mock-csi-mock-volumes-9429":"csi-mock-csi-mock-volumes-9429","csi-mock-csi-mock-volumes-9438":"csi-mock-csi-mock-volumes-9438","csi-mock-csi-mock-volumes-9439":"csi-mock-csi-mock-volumes-9439","csi-mock-csi-mock-volumes-9665":"csi-mock-csi-mock-volumes-9665","csi-mock-csi-mock-volumes-9772":"csi-mock-csi-mock-volumes-9772","csi-mock-csi-mock-volumes-9793":"csi-mock-csi-mock-volumes-9793","csi-mock-csi-mock-volumes-9921":"csi-mock-csi-mock-volumes-9921","csi-mock-csi-mock-volumes-9953":"csi-mock-csi-mock-volumes-9953"} kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-02-19 10:12:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {kube-controller-manager Update v1 2021-03-21 23:58:13 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}} {kubelet Update v1 2021-03-21 23:58:43 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:csi.volume.kubernetes.io/nodeid":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:io.kubernetes.storage.mock/node":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {e2e.test Update v1 2021-03-22 00:01:45 +0000 UTC FieldsV1 {"f:spec":{"f:taints":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:kind://docker/latest/latest-worker2,Unschedulable:false,Taints:[]Taint{Taint{Key:kubernetes.io/e2e-evict-taint-key,Value:evictTaintVal,Effect:NoExecute,TimeAdded:2021-03-22 00:01:45 +0000 UTC,},},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-03-21 23:58:43 +0000 UTC,LastTransitionTime:2021-02-19 10:12:05 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-03-21 23:58:43 +0000 UTC,LastTransitionTime:2021-02-19 10:12:05 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-03-21 23:58:43 +0000 UTC,LastTransitionTime:2021-02-19 10:12:05 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-03-21 23:58:43 +0000 UTC,LastTransitionTime:2021-02-19 10:12:35 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready 
status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.13,},NodeAddress{Type:Hostname,Address:latest-worker2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:c99819c175ab4bf18f91789643e4cec7,SystemUUID:67d3f1bb-f61c-4599-98ac-5879bee4ddde,BootID:b267d78b-f69b-4338-80e8-3f4944338e5d,KernelVersion:4.15.0-118-generic,OSImage:Ubuntu 20.04 LTS,ContainerRuntimeVersion:containerd://1.3.3-14-g449e9269,KubeletVersion:v1.21.0-alpha.0,KubeProxyVersion:v1.21.0-alpha.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/ollivier/clearwater-cassandra@sha256:7460a5e29c91e16bd36534b5284b27832e1c6e0f7c22a4bb79eda79942d250e1 docker.io/ollivier/clearwater-cassandra:hunter],SizeBytes:386500834,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead-prov@sha256:ef3803b4336ea217f9a467dac7f5c4bc3d2cd0f3cbbcc6c101419506fc3a6fa4 docker.io/ollivier/clearwater-homestead-prov:hunter],SizeBytes:360721934,},ContainerImage{Names:[docker.io/ollivier/clearwater-ellis@sha256:abcaf4a9ec74aa746f3b3949f05ee359349fce2b0b3b2deedf67164719cff5dc docker.io/ollivier/clearwater-ellis:hunter],SizeBytes:351519591,},ContainerImage{Names:[docker.io/ollivier/clearwater-homer@sha256:f8738d756137990120f6aafca7e0bc57186ab591a4b51508ba9ddf30688f4de1 docker.io/ollivier/clearwater-homer:hunter],SizeBytes:344304298,},ContainerImage{Names:[docker.io/ollivier/clearwater-astaire@sha256:5bc730435563dc3e31f3a3b7bb58b5899c453929b624052622b3a9a1c2fd04d8 docker.io/ollivier/clearwater-astaire:hunter],SizeBytes:327310970,},ContainerImage{Names:[docker.io/ollivier/clearwater-bono@sha256:0da41ddbf72c158f8e617015450587634cdf6856696005601b2108df92a27254 docker.io/ollivier/clearwater-bono:hunter],SizeBytes:303708624,},ContainerImage{Names:[docker.io/ollivier/clearwater-sprout@sha256:fd91f90bdfbfac30288ca296962e36537f8fa311d34711904e027098227d9f49 docker.io/ollivier/clearwater-sprout:hunter],SizeBytes:298627136,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead@sha256:8467d6c9caefa6a402eba927ee31f8179b2dc3386ac4768dd04f4bd630a1b9e9 docker.io/ollivier/clearwater-homestead:hunter],SizeBytes:295167572,},ContainerImage{Names:[docker.io/ollivier/clearwater-ralf@sha256:7054a741dbd4ea45fa4efd8138c07e21d35fbed627971559de5df2d03883e94f docker.io/ollivier/clearwater-ralf:hunter],SizeBytes:287441316,},ContainerImage{Names:[docker.io/ollivier/clearwater-chronos@sha256:1d5f9b5385db632e3076463fec5a4db5d4a15ef21ac5a112cb42fb0551ffa36d docker.io/ollivier/clearwater-chronos:hunter],SizeBytes:285504787,},ContainerImage{Names:[k8s.gcr.io/etcd:3.4.13-0],SizeBytes:254659261,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.21.0-alpha.0],SizeBytes:136866158,},ContainerImage{Names:[docker.io/aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 docker.io/aquasec/kube-hunter:0.3.1],SizeBytes:124684106,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:112029652,},ContainerImage{Names:[docker.io/gluster/glusterdynamic-provisioner@sha256:90067cb05a7d217651e84576935934fd9dcff8e6bcdcbaa416bbf36fcd09dbd1 
docker.io/gluster/glusterdynamic-provisioner:v1.0],SizeBytes:111200078,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner@sha256:8bc20b52ce066dd4ea3d9eaac40c04ea8a77f47c33789676580cf4c7c9ea3c3d k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner:v1.0],SizeBytes:111199402,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.21.0-alpha.0],SizeBytes:95511852,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.21.0-alpha.0],SizeBytes:88147262,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:85425365,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.21.0-alpha.0],SizeBytes:66088748,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:49230179,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:a3f7549ea04c419276e8b84e90a515bbce5bc8a057be2ed974ec45492eca346e k8s.gcr.io/e2e-test-images/agnhost:2.26],SizeBytes:49216572,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:4f3eec1e602e1e0b0e5858c8f3d399d0ef536799ec1f7666e025f211001cb706 k8s.gcr.io/e2e-test-images/agnhost:2.28],SizeBytes:49210832,},ContainerImage{Names:[k8s.gcr.io/coredns:1.7.0],SizeBytes:45355487,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:41902332,},ContainerImage{Names:[docker.io/library/httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a docker.io/library/httpd:2.4.39-alpine],SizeBytes:41901429,},ContainerImage{Names:[docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 docker.io/library/httpd:2.4.38-alpine],SizeBytes:40765017,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:40765006,},ContainerImage{Names:[docker.io/ollivier/clearwater-live-test@sha256:1ec1c46f531b5466e62e2a3f1e65e97336e8ea0ca50f11a4305775d6aeac9a58 docker.io/ollivier/clearwater-live-test:hunter],SizeBytes:39060692,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:ff02aacd9766d597883fabafc7ad604c719a57611db1bcc1564c69a45b000a55 gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.17],SizeBytes:25311280,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:24757245,},ContainerImage{Names:[docker.io/pingcap/chaos-daemon@sha256:2138de599ca94104101fcd8f3bd04401f8c88f2bc721d0f77d55a81b85f4d92f docker.io/pingcap/chaos-daemon:v0.8.0],SizeBytes:21885344,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:20c828075d1e36f679d6a91e905b0927141eef5e15be0c9a1ca4a6a0ed9313d2 k8s.gcr.io/sig-storage/csi-provisioner:v2.1.0],SizeBytes:21205045,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:78e3393f5fd5ff6c1e5dada2478cfa456fb7164929e573cf9a87bf6532730679 
k8s.gcr.io/sig-storage/csi-provisioner:v1.6.0],SizeBytes:19408504,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/regression-issue-74839@sha256:b4f1d8d61bdad84bd50442d161d5460e4019d53e989b64220fdbc62fc87d76bf k8s.gcr.io/e2e-test-images/regression-issue-74839:1.2],SizeBytes:18651485,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:2ffa647e8107cfd39e5f464e738dce014c9f5e51b108da36c3ab621048d0bbab k8s.gcr.io/sig-storage/csi-attacher:v2.2.0],SizeBytes:18451536,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:6c6a0332693a7c456378f6abd2bb40611826c1e1a733cadbdae2daab3125b71c k8s.gcr.io/sig-storage/csi-resizer:v0.5.0],SizeBytes:18412631,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:4051e85640c22f8e00c34dbd273576fc9e1e2829992656588062be9c0f69b04b k8s.gcr.io/e2e-test-images/nonroot:1.1],SizeBytes:17748448,},ContainerImage{Names:[docker.io/pingcap/chaos-mesh@sha256:b90c04665e0275602e52df37d4b6fdd4621df7d4e4c823fc63356559f67dea72 docker.io/pingcap/chaos-mesh:v0.8.0],SizeBytes:13863887,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e docker.io/coredns/coredns:1.8.0],SizeBytes:12945155,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:4a6e0769130686518325b21b0c1d0688b54e7c79244d48e1b15634e98e40c6ef docker.io/coredns/coredns:1.7.1],SizeBytes:12935701,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:642ff9910da6ea9a8624b0234eef52af9ca75ecbec474c5507cb096bdfbae4e5 docker.io/coredns/coredns:latest],SizeBytes:12893350,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 docker.io/aquasec/kube-bench:0.3.1],SizeBytes:8042926,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:9622c6a6dac7499a055a382930f4de82905a3c5735c0753f7094115c9c871309 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v1.3.0],SizeBytes:7717137,},ContainerImage{Names:[quay.io/coreos/etcd:v2.2.5],SizeBytes:7670543,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 22 00:01:54.827: INFO: Logging kubelet events for node latest-worker2 Mar 22 00:01:55.111: INFO: Logging pods the kubelet thinks is on node latest-worker2 Mar 22 00:01:55.463: INFO: pod-service-account-mountsa started at 2021-03-22 00:00:55 +0000 UTC (0+1 container statuses recorded) Mar 22 00:01:55.463: INFO: Container token-test ready: false, restart count 0 Mar 22 00:01:55.463: INFO: taint-eviction-a1 started at 2021-03-22 00:01:45 +0000 UTC (0+1 container statuses recorded) Mar 22 00:01:55.463: INFO: Container pause ready: false, restart count 0 Mar 22 00:01:55.463: INFO: kindnet-gp4fv started at 2021-03-21 23:47:16 +0000 UTC (0+1 container statuses recorded) Mar 22 00:01:55.463: INFO: Container kindnet-cni ready: true, restart count 0 Mar 22 00:01:55.463: INFO: kube-proxy-7q92q started at 2021-02-19 10:12:05 +0000 UTC (0+1 container statuses recorded) Mar 22 00:01:55.463: INFO: Container kube-proxy ready: true, restart count 0 Mar 22 00:01:55.463: INFO: chaos-daemon-95pmt started at 2021-03-21 23:47:16 +0000 
UTC (0+1 container statuses recorded) Mar 22 00:01:55.463: INFO: Container chaos-daemon ready: true, restart count 0 Mar 22 00:01:55.463: INFO: pod-service-account-defaultsa started at 2021-03-22 00:00:55 +0000 UTC (0+1 container statuses recorded) Mar 22 00:01:55.463: INFO: Container token-test ready: false, restart count 0 Mar 22 00:01:55.463: INFO: chaos-controller-manager-69c479c674-k8l6r started at 2021-03-21 23:49:35 +0000 UTC (0+1 container statuses recorded) Mar 22 00:01:55.463: INFO: Container chaos-mesh ready: true, restart count 0 Mar 22 00:01:55.463: INFO: pod-service-account-nomountsa started at 2021-03-22 00:00:55 +0000 UTC (0+1 container statuses recorded) Mar 22 00:01:55.463: INFO: Container token-test ready: false, restart count 0 Mar 22 00:01:55.463: INFO: pod-service-account-nomountsa-nomountspec started at 2021-03-22 00:00:55 +0000 UTC (0+1 container statuses recorded) Mar 22 00:01:55.463: INFO: Container token-test ready: false, restart count 0 Mar 22 00:01:55.463: INFO: coredns-74ff55c5b-q4csd started at 2021-03-21 23:57:21 +0000 UTC (0+1 container statuses recorded) Mar 22 00:01:55.463: INFO: Container coredns ready: true, restart count 0 Mar 22 00:01:55.463: INFO: pod-service-account-mountsa-mountspec started at 2021-03-22 00:00:55 +0000 UTC (0+1 container statuses recorded) Mar 22 00:01:55.463: INFO: Container token-test ready: false, restart count 0 W0322 00:01:55.893802 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Mar 22 00:01:56.928: INFO: Latency metrics for node latest-worker2 Mar 22 00:01:56.928: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-9024" for this suite. • Failure [30.497 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should have monotonically increasing restart count [NodeConformance] [Conformance] [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Mar 22 00:01:46.976: getting pod Unexpected error: <*errors.StatusError | 0xc002e774a0>: { ErrStatus: { TypeMeta: {Kind: "", APIVersion: ""}, ListMeta: { SelfLink: "", ResourceVersion: "", Continue: "", RemainingItemCount: nil, }, Status: "Failure", Message: "pods \"liveness-1d69dde1-c85e-44c0-9f95-c66d15be754c\" not found", Reason: "NotFound", Details: { Name: "liveness-1d69dde1-c85e-44c0-9f95-c66d15be754c", Group: "", Kind: "pods", UID: "", Causes: nil, RetryAfterSeconds: 0, }, Code: 404, }, } pods "liveness-1d69dde1-c85e-44c0-9f95-c66d15be754c" not found occurred /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:607 ------------------------------ {"msg":"FAILED [sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":330,"completed":53,"skipped":984,"failed":2,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 
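Before the ConfigMap run below, a note on the failure recorded above: the probe test died on a plain HTTP 404 surfaced as a *errors.StatusError, meaning the liveness pod was deleted before the test re-read it. A minimal client-go sketch (not the e2e framework's own helper) of how such an error is classified:

```go
package sketch

import (
	"context"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// podGone reports whether the pod has been deleted. apierrors.IsNotFound
// matches exactly the Reason "NotFound" / Code 404 in the StatusError dump
// above, and distinguishes it from transient API errors.
func podGone(c kubernetes.Interface, ns, name string) (bool, error) {
	_, err := c.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
	if apierrors.IsNotFound(err) {
		return true, nil // e.g. pods "liveness-1d69dde1-..." not found
	}
	return false, err
}
```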
[BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 00:01:57.460: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name configmap-test-volume-map-8263fe5d-a6ed-427b-96c7-0015e588521a STEP: Creating a pod to test consume configMaps Mar 22 00:01:58.648: INFO: Waiting up to 5m0s for pod "pod-configmaps-40d2a62f-ea68-4961-a66d-9ab75834765a" in namespace "configmap-8485" to be "Succeeded or Failed" Mar 22 00:01:58.722: INFO: Pod "pod-configmaps-40d2a62f-ea68-4961-a66d-9ab75834765a": Phase="Pending", Reason="", readiness=false. Elapsed: 74.266968ms Mar 22 00:02:00.856: INFO: Pod "pod-configmaps-40d2a62f-ea68-4961-a66d-9ab75834765a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.20791186s Mar 22 00:02:02.879: INFO: Pod "pod-configmaps-40d2a62f-ea68-4961-a66d-9ab75834765a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.2310363s Mar 22 00:02:04.926: INFO: Pod "pod-configmaps-40d2a62f-ea68-4961-a66d-9ab75834765a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.278184787s Mar 22 00:02:06.975: INFO: Pod "pod-configmaps-40d2a62f-ea68-4961-a66d-9ab75834765a": Phase="Pending", Reason="", readiness=false. Elapsed: 8.326335095s Mar 22 00:02:09.070: INFO: Pod "pod-configmaps-40d2a62f-ea68-4961-a66d-9ab75834765a": Phase="Pending", Reason="", readiness=false. Elapsed: 10.422103895s Mar 22 00:02:11.110: INFO: Pod "pod-configmaps-40d2a62f-ea68-4961-a66d-9ab75834765a": Phase="Pending", Reason="", readiness=false. Elapsed: 12.462319414s Mar 22 00:02:13.119: INFO: Pod "pod-configmaps-40d2a62f-ea68-4961-a66d-9ab75834765a": Phase="Pending", Reason="", readiness=false. Elapsed: 14.471183374s Mar 22 00:02:15.130: INFO: Pod "pod-configmaps-40d2a62f-ea68-4961-a66d-9ab75834765a": Phase="Pending", Reason="", readiness=false. Elapsed: 16.481892721s Mar 22 00:02:17.191: INFO: Pod "pod-configmaps-40d2a62f-ea68-4961-a66d-9ab75834765a": Phase="Pending", Reason="", readiness=false. Elapsed: 18.542561113s Mar 22 00:02:19.227: INFO: Pod "pod-configmaps-40d2a62f-ea68-4961-a66d-9ab75834765a": Phase="Pending", Reason="", readiness=false. Elapsed: 20.578444506s Mar 22 00:02:21.305: INFO: Pod "pod-configmaps-40d2a62f-ea68-4961-a66d-9ab75834765a": Phase="Pending", Reason="", readiness=false. Elapsed: 22.656539354s Mar 22 00:02:23.311: INFO: Pod "pod-configmaps-40d2a62f-ea68-4961-a66d-9ab75834765a": Phase="Pending", Reason="", readiness=false. Elapsed: 24.662423735s Mar 22 00:02:25.350: INFO: Pod "pod-configmaps-40d2a62f-ea68-4961-a66d-9ab75834765a": Phase="Pending", Reason="", readiness=false. Elapsed: 26.701595186s Mar 22 00:02:27.466: INFO: Pod "pod-configmaps-40d2a62f-ea68-4961-a66d-9ab75834765a": Phase="Pending", Reason="", readiness=false. Elapsed: 28.817845872s Mar 22 00:02:29.489: INFO: Pod "pod-configmaps-40d2a62f-ea68-4961-a66d-9ab75834765a": Phase="Pending", Reason="", readiness=false. Elapsed: 30.840370238s Mar 22 00:02:31.555: INFO: Pod "pod-configmaps-40d2a62f-ea68-4961-a66d-9ab75834765a": Phase="Pending", Reason="", readiness=false. 
Elapsed: 32.906953181s Mar 22 00:02:33.560: INFO: Pod "pod-configmaps-40d2a62f-ea68-4961-a66d-9ab75834765a": Phase="Pending", Reason="", readiness=false. Elapsed: 34.91172965s Mar 22 00:02:35.588: INFO: Pod "pod-configmaps-40d2a62f-ea68-4961-a66d-9ab75834765a": Phase="Pending", Reason="", readiness=false. Elapsed: 36.939757363s Mar 22 00:02:37.592: INFO: Pod "pod-configmaps-40d2a62f-ea68-4961-a66d-9ab75834765a": Phase="Pending", Reason="", readiness=false. Elapsed: 38.943671158s Mar 22 00:02:39.601: INFO: Pod "pod-configmaps-40d2a62f-ea68-4961-a66d-9ab75834765a": Phase="Pending", Reason="", readiness=false. Elapsed: 40.953225699s Mar 22 00:02:41.672: INFO: Pod "pod-configmaps-40d2a62f-ea68-4961-a66d-9ab75834765a": Phase="Pending", Reason="", readiness=false. Elapsed: 43.023479603s Mar 22 00:02:43.715: INFO: Pod "pod-configmaps-40d2a62f-ea68-4961-a66d-9ab75834765a": Phase="Pending", Reason="", readiness=false. Elapsed: 45.06701282s Mar 22 00:02:45.732: INFO: Pod "pod-configmaps-40d2a62f-ea68-4961-a66d-9ab75834765a": Phase="Pending", Reason="", readiness=false. Elapsed: 47.084102632s Mar 22 00:02:47.769: INFO: Pod "pod-configmaps-40d2a62f-ea68-4961-a66d-9ab75834765a": Phase="Pending", Reason="", readiness=false. Elapsed: 49.120930896s Mar 22 00:02:49.818: INFO: Pod "pod-configmaps-40d2a62f-ea68-4961-a66d-9ab75834765a": Phase="Pending", Reason="", readiness=false. Elapsed: 51.170324513s Mar 22 00:02:51.861: INFO: Pod "pod-configmaps-40d2a62f-ea68-4961-a66d-9ab75834765a": Phase="Pending", Reason="", readiness=false. Elapsed: 53.212696163s Mar 22 00:02:53.909: INFO: Pod "pod-configmaps-40d2a62f-ea68-4961-a66d-9ab75834765a": Phase="Pending", Reason="", readiness=false. Elapsed: 55.260942316s Mar 22 00:02:56.300: INFO: Pod "pod-configmaps-40d2a62f-ea68-4961-a66d-9ab75834765a": Phase="Pending", Reason="", readiness=false. Elapsed: 57.651345721s Mar 22 00:02:58.870: INFO: Pod "pod-configmaps-40d2a62f-ea68-4961-a66d-9ab75834765a": Phase="Pending", Reason="", readiness=false. Elapsed: 1m0.221979579s Mar 22 00:03:01.216: INFO: Pod "pod-configmaps-40d2a62f-ea68-4961-a66d-9ab75834765a": Phase="Pending", Reason="", readiness=false. Elapsed: 1m2.567482738s Mar 22 00:03:04.255: INFO: Pod "pod-configmaps-40d2a62f-ea68-4961-a66d-9ab75834765a": Phase="Pending", Reason="", readiness=false. Elapsed: 1m5.607152508s Mar 22 00:03:06.886: INFO: Pod "pod-configmaps-40d2a62f-ea68-4961-a66d-9ab75834765a": Phase="Pending", Reason="", readiness=false. Elapsed: 1m8.237771435s Mar 22 00:03:08.892: INFO: Pod "pod-configmaps-40d2a62f-ea68-4961-a66d-9ab75834765a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 1m10.243389681s STEP: Saw pod success Mar 22 00:03:08.892: INFO: Pod "pod-configmaps-40d2a62f-ea68-4961-a66d-9ab75834765a" satisfied condition "Succeeded or Failed" Mar 22 00:03:08.919: INFO: Trying to get logs from node latest-worker pod pod-configmaps-40d2a62f-ea68-4961-a66d-9ab75834765a container agnhost-container: STEP: delete the pod Mar 22 00:03:09.115: INFO: Waiting for pod pod-configmaps-40d2a62f-ea68-4961-a66d-9ab75834765a to disappear Mar 22 00:03:09.119: INFO: Pod pod-configmaps-40d2a62f-ea68-4961-a66d-9ab75834765a no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 00:03:09.119: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-8485" for this suite. 
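The long run of Phase="Pending" lines above is produced by a fixed-interval poll toward the condition "Succeeded or Failed". A sketch of that wait loop, assuming client-go and apimachinery's wait package; the ~2s interval is inferred from the log timestamps, and the framework's actual helper differs in detail:

```go
package sketch

import (
	"context"
	"fmt"
	"time"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitSucceededOrFailed polls the pod's phase until it reaches a terminal
// state or the 5m timeout expires, logging one line per attempt as above.
func waitSucceededOrFailed(c kubernetes.Interface, ns, name string) error {
	start := time.Now()
	return wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
		pod, err := c.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		fmt.Printf("Pod %q: Phase=%q. Elapsed: %v\n", name, pod.Status.Phase, time.Since(start))
		// Terminal phases satisfy the condition "Succeeded or Failed".
		return pod.Status.Phase == v1.PodSucceeded || pod.Status.Phase == v1.PodFailed, nil
	})
}
```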
• [SLOW TEST:71.792 seconds] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":330,"completed":54,"skipped":1017,"failed":2,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]"]} SSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 00:03:09.253: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 [It] should provide container's cpu limit [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin Mar 22 00:03:09.671: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ca5f9775-82bd-4c10-a258-3548359f6089" in namespace "projected-7166" to be "Succeeded or Failed" Mar 22 00:03:09.703: INFO: Pod "downwardapi-volume-ca5f9775-82bd-4c10-a258-3548359f6089": Phase="Pending", Reason="", readiness=false. Elapsed: 31.619951ms Mar 22 00:03:11.893: INFO: Pod "downwardapi-volume-ca5f9775-82bd-4c10-a258-3548359f6089": Phase="Pending", Reason="", readiness=false. Elapsed: 2.221804514s Mar 22 00:03:13.947: INFO: Pod "downwardapi-volume-ca5f9775-82bd-4c10-a258-3548359f6089": Phase="Running", Reason="", readiness=true. Elapsed: 4.276063106s Mar 22 00:03:16.169: INFO: Pod "downwardapi-volume-ca5f9775-82bd-4c10-a258-3548359f6089": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.498459781s STEP: Saw pod success Mar 22 00:03:16.169: INFO: Pod "downwardapi-volume-ca5f9775-82bd-4c10-a258-3548359f6089" satisfied condition "Succeeded or Failed" Mar 22 00:03:16.242: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-ca5f9775-82bd-4c10-a258-3548359f6089 container client-container: STEP: delete the pod Mar 22 00:03:16.820: INFO: Waiting for pod downwardapi-volume-ca5f9775-82bd-4c10-a258-3548359f6089 to disappear Mar 22 00:03:16.877: INFO: Pod downwardapi-volume-ca5f9775-82bd-4c10-a258-3548359f6089 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 00:03:16.878: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7166" for this suite. 
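The downwardAPI test above mounts a projected volume whose file content is the container's own CPU limit, then reads it back from the pod's output. A hedged sketch of the pod shape involved, using the core/v1 Go types; the image, command, and limit value are illustrative, not lifted from the test source, while the container name matches the log:

```go
package sketch

import (
	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// downwardAPICPULimitPod builds a pod with a projected downwardAPI volume
// exposing limits.cpu as the file /etc/podinfo/cpu_limit.
func downwardAPICPULimitPod() *v1.Pod {
	return &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{GenerateName: "downwardapi-volume-"},
		Spec: v1.PodSpec{
			RestartPolicy: v1.RestartPolicyNever,
			Containers: []v1.Container{{
				Name:    "client-container", // as in the log above
				Image:   "k8s.gcr.io/e2e-test-images/agnhost:2.28", // illustrative
				Command: []string{"/agnhost", "mounttest", "--file_content=/etc/podinfo/cpu_limit"},
				Resources: v1.ResourceRequirements{
					Limits: v1.ResourceList{v1.ResourceCPU: resource.MustParse("1250m")}, // illustrative value
				},
				VolumeMounts: []v1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
			Volumes: []v1.Volume{{
				Name: "podinfo",
				VolumeSource: v1.VolumeSource{
					Projected: &v1.ProjectedVolumeSource{
						Sources: []v1.VolumeProjection{{
							DownwardAPI: &v1.DownwardAPIProjection{
								Items: []v1.DownwardAPIVolumeFile{{
									Path: "cpu_limit",
									ResourceFieldRef: &v1.ResourceFieldSelector{
										ContainerName: "client-container",
										Resource:      "limits.cpu",
									},
								}},
							},
						}},
					},
				},
			}},
		},
	}
}
```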
• [SLOW TEST:7.786 seconds] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should provide container's cpu limit [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":330,"completed":55,"skipped":1021,"failed":2,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]"]} SSSSS ------------------------------ [sig-node] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 00:03:17.039: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46 [It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Mar 22 00:03:17.481: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-6241c9d3-9342-4411-951a-8529b43d6105" in namespace "security-context-test-999" to be "Succeeded or Failed" Mar 22 00:03:17.517: INFO: Pod "alpine-nnp-false-6241c9d3-9342-4411-951a-8529b43d6105": Phase="Pending", Reason="", readiness=false. Elapsed: 35.557142ms Mar 22 00:03:19.577: INFO: Pod "alpine-nnp-false-6241c9d3-9342-4411-951a-8529b43d6105": Phase="Pending", Reason="", readiness=false. Elapsed: 2.095711236s Mar 22 00:03:21.658: INFO: Pod "alpine-nnp-false-6241c9d3-9342-4411-951a-8529b43d6105": Phase="Running", Reason="", readiness=true. Elapsed: 4.177311251s Mar 22 00:03:23.730: INFO: Pod "alpine-nnp-false-6241c9d3-9342-4411-951a-8529b43d6105": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.249342838s Mar 22 00:03:23.731: INFO: Pod "alpine-nnp-false-6241c9d3-9342-4411-951a-8529b43d6105" satisfied condition "Succeeded or Failed" [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 00:03:23.759: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-999" for this suite. 
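
The field under test above is a single securityContext knob. A sketch with a stand-in image (the suite uses its own test image):

import corev1 "k8s.io/api/core/v1"

// AllowPrivilegeEscalation=false sets no_new_privs on the container process,
// so a setuid binary inside the image cannot raise its effective privileges.
func noEscalationContainer() corev1.Container {
	allow := false
	return corev1.Container{
		Name:            "alpine-nnp-false",
		Image:           "alpine:3.13", // stand-in; not the suite's actual image
		SecurityContext: &corev1.SecurityContext{AllowPrivilegeEscalation: &allow},
	}
}
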
• [SLOW TEST:6.848 seconds] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 when creating containers with AllowPrivilegeEscalation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:296 should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":330,"completed":56,"skipped":1026,"failed":2,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]"]} SSSS ------------------------------ [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 00:03:23.887: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test emptydir volume type on tmpfs Mar 22 00:03:24.453: INFO: Waiting up to 5m0s for pod "pod-37c43b7c-a72d-4d7f-b818-6f41677125d2" in namespace "emptydir-5767" to be "Succeeded or Failed" Mar 22 00:03:24.531: INFO: Pod "pod-37c43b7c-a72d-4d7f-b818-6f41677125d2": Phase="Pending", Reason="", readiness=false. Elapsed: 77.482494ms Mar 22 00:03:26.666: INFO: Pod "pod-37c43b7c-a72d-4d7f-b818-6f41677125d2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.212350784s Mar 22 00:03:28.911: INFO: Pod "pod-37c43b7c-a72d-4d7f-b818-6f41677125d2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.45781312s Mar 22 00:03:30.952: INFO: Pod "pod-37c43b7c-a72d-4d7f-b818-6f41677125d2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.498957596s STEP: Saw pod success Mar 22 00:03:30.952: INFO: Pod "pod-37c43b7c-a72d-4d7f-b818-6f41677125d2" satisfied condition "Succeeded or Failed" Mar 22 00:03:30.981: INFO: Trying to get logs from node latest-worker pod pod-37c43b7c-a72d-4d7f-b818-6f41677125d2 container test-container: STEP: delete the pod Mar 22 00:03:31.215: INFO: Waiting for pod pod-37c43b7c-a72d-4d7f-b818-6f41677125d2 to disappear Mar 22 00:03:31.238: INFO: Pod pod-37c43b7c-a72d-4d7f-b818-6f41677125d2 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 00:03:31.239: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5767" for this suite. 
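
The emptyDir variant above comes down to one field on the volume source. A sketch:

import corev1 "k8s.io/api/core/v1"

// Medium: Memory backs the emptyDir with tmpfs; the test then stats the mount
// from inside the pod, checking the filesystem type and the default 0777 mode.
var tmpfsVolume = corev1.Volume{
	Name: "test-volume",
	VolumeSource: corev1.VolumeSource{
		EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
	},
}
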
• [SLOW TEST:7.455 seconds] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":330,"completed":57,"skipped":1030,"failed":2,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]"]} SSSSSSS ------------------------------ [sig-auth] ServiceAccounts ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 00:03:31.343: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Mar 22 00:03:31.626: INFO: created pod Mar 22 00:03:31.626: INFO: Waiting up to 5m0s for pod "oidc-discovery-validator" in namespace "svcaccounts-3506" to be "Succeeded or Failed" Mar 22 00:03:31.666: INFO: Pod "oidc-discovery-validator": Phase="Pending", Reason="", readiness=false. Elapsed: 40.72159ms Mar 22 00:03:33.905: INFO: Pod "oidc-discovery-validator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.279553286s Mar 22 00:03:36.194: INFO: Pod "oidc-discovery-validator": Phase="Pending", Reason="", readiness=false. Elapsed: 4.567978578s Mar 22 00:03:38.361: INFO: Pod "oidc-discovery-validator": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.735104218s STEP: Saw pod success Mar 22 00:03:38.361: INFO: Pod "oidc-discovery-validator" satisfied condition "Succeeded or Failed" Mar 22 00:04:08.361: INFO: polling logs Mar 22 00:04:08.417: INFO: Pod logs: 2021/03/22 00:03:36 OK: Got token 2021/03/22 00:03:36 OK: got issuer https://kubernetes.default.svc.cluster.local 2021/03/22 00:03:36 Full, not-validated claims: openidmetadata.claims{Claims:jwt.Claims{Issuer:"https://kubernetes.default.svc.cluster.local", Subject:"system:serviceaccount:svcaccounts-3506:default", Audience:jwt.Audience{"oidc-discovery-test"}, Expiry:1616372011, NotBefore:1616371411, IssuedAt:1616371411, ID:""}, Kubernetes:openidmetadata.kubeClaims{Namespace:"svcaccounts-3506", ServiceAccount:openidmetadata.kubeName{Name:"default", UID:"9a4ff3d0-b61b-4a30-b574-421a8863c03a"}}} 2021/03/22 00:03:36 OK: Constructed OIDC provider for issuer https://kubernetes.default.svc.cluster.local 2021/03/22 00:03:36 OK: Validated signature on JWT 2021/03/22 00:03:36 OK: Got valid claims from token! 
2021/03/22 00:03:36 Full, validated claims: &openidmetadata.claims{Claims:jwt.Claims{Issuer:"https://kubernetes.default.svc.cluster.local", Subject:"system:serviceaccount:svcaccounts-3506:default", Audience:jwt.Audience{"oidc-discovery-test"}, Expiry:1616372011, NotBefore:1616371411, IssuedAt:1616371411, ID:""}, Kubernetes:openidmetadata.kubeClaims{Namespace:"svcaccounts-3506", ServiceAccount:openidmetadata.kubeName{Name:"default", UID:"9a4ff3d0-b61b-4a30-b574-421a8863c03a"}}} Mar 22 00:04:08.417: INFO: completed pod [AfterEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 00:04:08.465: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-3506" for this suite. • [SLOW TEST:37.763 seconds] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-auth] ServiceAccounts ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance]","total":330,"completed":58,"skipped":1037,"failed":2,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 00:04:09.107: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Mar 22 00:04:15.139: INFO: Expected: &{OK} to match Container's Termination Message: OK -- STEP: delete the container [AfterEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 00:04:15.464: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-4852" for this suite. 
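
The FallbackToLogsOnError case verified above ("Expected: &{OK} to match ... OK") is driven by two container fields. A sketch with a stand-in busybox image:

import corev1 "k8s.io/api/core/v1"

// The container exits 0 after writing "OK" to the default termination-log
// path. FallbackToLogsOnError consults container logs only when the file is
// empty and the container failed, so here the file's contents are reported.
var termContainer = corev1.Container{
	Name:                     "termination-message-container",
	Image:                    "busybox:1.33", // stand-in image
	Command:                  []string{"/bin/sh", "-c", "echo -n OK > /dev/termination-log"},
	TerminationMessagePath:   "/dev/termination-log",
	TerminationMessagePolicy: corev1.TerminationMessageFallbackToLogsOnError,
}
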
• [SLOW TEST:6.554 seconds] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 blackbox test /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:41 on terminated container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:134 should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":330,"completed":59,"skipped":1065,"failed":2,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]"]} SS ------------------------------ [sig-instrumentation] Events API should delete a collection of events [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-instrumentation] Events API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 00:04:15.661: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-instrumentation] Events API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/instrumentation/events.go:81 [It] should delete a collection of events [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Create set of events STEP: get a list of Events with a label in the current namespace STEP: delete a list of events Mar 22 00:04:15.875: INFO: requesting DeleteCollection of events STEP: check that the list of events matches the requested quantity [AfterEach] [sig-instrumentation] Events API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 00:04:16.144: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-6329" for this suite. 
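
The "requesting DeleteCollection of events" step above is a single call against the events.k8s.io/v1 typed client; the label selector below is an illustrative stand-in for whatever label the suite stamps on its test events:

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// deleteTestEvents removes every Event in the namespace matching the selector
// in one server-side collection delete, as the test does.
func deleteTestEvents(cs kubernetes.Interface, ns string) error {
	return cs.EventsV1().Events(ns).DeleteCollection(context.TODO(),
		metav1.DeleteOptions{},
		metav1.ListOptions{LabelSelector: "testevent-set=true"})
}
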
•{"msg":"PASSED [sig-instrumentation] Events API should delete a collection of events [Conformance]","total":330,"completed":60,"skipped":1067,"failed":2,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]"]} SSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 00:04:16.271: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test emptydir volume type on node default medium Mar 22 00:04:16.483: INFO: Waiting up to 5m0s for pod "pod-fc3cd95f-4296-4bb7-9428-882685f1b6b9" in namespace "emptydir-4786" to be "Succeeded or Failed" Mar 22 00:04:16.534: INFO: Pod "pod-fc3cd95f-4296-4bb7-9428-882685f1b6b9": Phase="Pending", Reason="", readiness=false. Elapsed: 50.60612ms Mar 22 00:04:18.582: INFO: Pod "pod-fc3cd95f-4296-4bb7-9428-882685f1b6b9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.098714645s Mar 22 00:04:20.653: INFO: Pod "pod-fc3cd95f-4296-4bb7-9428-882685f1b6b9": Phase="Running", Reason="", readiness=true. Elapsed: 4.170465756s Mar 22 00:04:22.737: INFO: Pod "pod-fc3cd95f-4296-4bb7-9428-882685f1b6b9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.25379211s STEP: Saw pod success Mar 22 00:04:22.737: INFO: Pod "pod-fc3cd95f-4296-4bb7-9428-882685f1b6b9" satisfied condition "Succeeded or Failed" Mar 22 00:04:22.788: INFO: Trying to get logs from node latest-worker2 pod pod-fc3cd95f-4296-4bb7-9428-882685f1b6b9 container test-container: STEP: delete the pod Mar 22 00:04:23.002: INFO: Waiting for pod pod-fc3cd95f-4296-4bb7-9428-882685f1b6b9 to disappear Mar 22 00:04:23.059: INFO: Pod pod-fc3cd95f-4296-4bb7-9428-882685f1b6b9 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 00:04:23.059: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4786" for this suite. 
• [SLOW TEST:6.848 seconds] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":330,"completed":61,"skipped":1072,"failed":2,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]"]} SSS ------------------------------ [sig-node] Lease lease API should be available [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-node] Lease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 00:04:23.119: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename lease-test STEP: Waiting for a default service account to be provisioned in namespace [It] lease API should be available [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [AfterEach] [sig-node] Lease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 00:04:23.804: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "lease-test-6656" for this suite. 
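
The Lease API checked above lives in coordination.k8s.io/v1. A sketch of the create side, with illustrative holder and duration values:

import (
	"context"

	coordinationv1 "k8s.io/api/coordination/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// createLease names a holder and how long the claim stays valid between
// renewals; the test then reads, updates, patches, and deletes such objects.
func createLease(cs kubernetes.Interface, ns string) error {
	holder := "holder-identity"
	ttl := int32(30)
	lease := &coordinationv1.Lease{
		ObjectMeta: metav1.ObjectMeta{Name: "lease-example"},
		Spec: coordinationv1.LeaseSpec{
			HolderIdentity:       &holder,
			LeaseDurationSeconds: &ttl,
		},
	}
	_, err := cs.CoordinationV1().Leases(ns).Create(context.TODO(), lease, metav1.CreateOptions{})
	return err
}
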
•{"msg":"PASSED [sig-node] Lease lease API should be available [Conformance]","total":330,"completed":62,"skipped":1075,"failed":2,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]"]} SSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 00:04:24.007: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin Mar 22 00:04:24.279: INFO: Waiting up to 5m0s for pod "downwardapi-volume-88a0ba13-ab38-4776-80ed-1491168ec9e0" in namespace "projected-200" to be "Succeeded or Failed" Mar 22 00:04:24.391: INFO: Pod "downwardapi-volume-88a0ba13-ab38-4776-80ed-1491168ec9e0": Phase="Pending", Reason="", readiness=false. Elapsed: 111.295801ms Mar 22 00:04:26.511: INFO: Pod "downwardapi-volume-88a0ba13-ab38-4776-80ed-1491168ec9e0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.23135539s Mar 22 00:04:28.526: INFO: Pod "downwardapi-volume-88a0ba13-ab38-4776-80ed-1491168ec9e0": Phase="Running", Reason="", readiness=true. Elapsed: 4.24657431s Mar 22 00:04:30.542: INFO: Pod "downwardapi-volume-88a0ba13-ab38-4776-80ed-1491168ec9e0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.263066598s STEP: Saw pod success Mar 22 00:04:30.542: INFO: Pod "downwardapi-volume-88a0ba13-ab38-4776-80ed-1491168ec9e0" satisfied condition "Succeeded or Failed" Mar 22 00:04:30.544: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-88a0ba13-ab38-4776-80ed-1491168ec9e0 container client-container: STEP: delete the pod Mar 22 00:04:30.803: INFO: Waiting for pod downwardapi-volume-88a0ba13-ab38-4776-80ed-1491168ec9e0 to disappear Mar 22 00:04:30.898: INFO: Pod downwardapi-volume-88a0ba13-ab38-4776-80ed-1491168ec9e0 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 00:04:30.898: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-200" for this suite. 
• [SLOW TEST:7.076 seconds] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":330,"completed":63,"skipped":1082,"failed":2,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]"]} S ------------------------------ [sig-node] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-node] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 00:04:31.083: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should use the image defaults if command and args are blank [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [AfterEach] [sig-node] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 00:04:35.519: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-1771" for this suite. 
•{"msg":"PASSED [sig-node] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":330,"completed":64,"skipped":1083,"failed":2,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should run through the lifecycle of a ServiceAccount [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 00:04:35.722: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should run through the lifecycle of a ServiceAccount [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a ServiceAccount STEP: watching for the ServiceAccount to be added STEP: patching the ServiceAccount STEP: finding ServiceAccount in list of all ServiceAccounts (by LabelSelector) STEP: deleting the ServiceAccount [AfterEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 00:04:36.134: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-1238" for this suite. 
•{"msg":"PASSED [sig-auth] ServiceAccounts should run through the lifecycle of a ServiceAccount [Conformance]","total":330,"completed":65,"skipped":1118,"failed":2,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]"]} SSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 00:04:36.186: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating pod pod-subpath-test-configmap-wxt5 STEP: Creating a pod to test atomic-volume-subpath Mar 22 00:04:36.529: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-wxt5" in namespace "subpath-90" to be "Succeeded or Failed" Mar 22 00:04:36.609: INFO: Pod "pod-subpath-test-configmap-wxt5": Phase="Pending", Reason="", readiness=false. Elapsed: 80.248673ms Mar 22 00:04:38.906: INFO: Pod "pod-subpath-test-configmap-wxt5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.376883103s Mar 22 00:04:40.912: INFO: Pod "pod-subpath-test-configmap-wxt5": Phase="Running", Reason="", readiness=true. Elapsed: 4.383248962s Mar 22 00:04:43.005: INFO: Pod "pod-subpath-test-configmap-wxt5": Phase="Running", Reason="", readiness=true. Elapsed: 6.475909767s Mar 22 00:04:45.145: INFO: Pod "pod-subpath-test-configmap-wxt5": Phase="Running", Reason="", readiness=true. Elapsed: 8.615925809s Mar 22 00:04:47.174: INFO: Pod "pod-subpath-test-configmap-wxt5": Phase="Running", Reason="", readiness=true. Elapsed: 10.64528746s Mar 22 00:04:49.184: INFO: Pod "pod-subpath-test-configmap-wxt5": Phase="Running", Reason="", readiness=true. Elapsed: 12.654775623s Mar 22 00:04:51.327: INFO: Pod "pod-subpath-test-configmap-wxt5": Phase="Running", Reason="", readiness=true. Elapsed: 14.797577848s Mar 22 00:04:53.492: INFO: Pod "pod-subpath-test-configmap-wxt5": Phase="Running", Reason="", readiness=true. Elapsed: 16.962610302s Mar 22 00:04:55.614: INFO: Pod "pod-subpath-test-configmap-wxt5": Phase="Running", Reason="", readiness=true. Elapsed: 19.085231675s Mar 22 00:04:57.625: INFO: Pod "pod-subpath-test-configmap-wxt5": Phase="Running", Reason="", readiness=true. Elapsed: 21.09587431s Mar 22 00:04:59.650: INFO: Pod "pod-subpath-test-configmap-wxt5": Phase="Running", Reason="", readiness=true. Elapsed: 23.120544991s Mar 22 00:05:01.801: INFO: Pod "pod-subpath-test-configmap-wxt5": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 25.271536887s STEP: Saw pod success Mar 22 00:05:01.801: INFO: Pod "pod-subpath-test-configmap-wxt5" satisfied condition "Succeeded or Failed" Mar 22 00:05:01.866: INFO: Trying to get logs from node latest-worker2 pod pod-subpath-test-configmap-wxt5 container test-container-subpath-configmap-wxt5: STEP: delete the pod Mar 22 00:05:02.004: INFO: Waiting for pod pod-subpath-test-configmap-wxt5 to disappear Mar 22 00:05:02.022: INFO: Pod pod-subpath-test-configmap-wxt5 no longer exists STEP: Deleting pod pod-subpath-test-configmap-wxt5 Mar 22 00:05:02.022: INFO: Deleting pod "pod-subpath-test-configmap-wxt5" in namespace "subpath-90" [AfterEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 00:05:02.104: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-90" for this suite. • [SLOW TEST:25.961 seconds] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":330,"completed":66,"skipped":1123,"failed":2,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 00:05:02.148: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a service. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Service STEP: Creating a NodePort Service STEP: Not allowing a LoadBalancer Service with NodePort to be created that exceeds remaining quota STEP: Ensuring resource quota status captures service creation STEP: Deleting Services STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 00:05:15.152: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-7946" for this suite. 
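
A quota shaped like the one driving the STEPs above; the exact quantities in the suite may differ, but the mechanism is the same: the LoadBalancer Service is rejected once the remaining services.nodeports quota cannot cover it:

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// Hard limits are counted per namespace: total Services, Services that
// allocate node ports, and LoadBalancer Services (illustrative quantities).
var quota = &corev1.ResourceQuota{
	ObjectMeta: metav1.ObjectMeta{Name: "quota-for-services"},
	Spec: corev1.ResourceQuotaSpec{
		Hard: corev1.ResourceList{
			corev1.ResourceServices:              resource.MustParse("2"),
			corev1.ResourceServicesNodePorts:     resource.MustParse("1"),
			corev1.ResourceServicesLoadBalancers: resource.MustParse("1"),
		},
	},
}
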
• [SLOW TEST:13.143 seconds] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a service. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance]","total":330,"completed":67,"skipped":1156,"failed":2,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]"]} SSSSSSSS ------------------------------ [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 00:05:15.291: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod UID as env vars [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward api env vars Mar 22 00:05:15.654: INFO: Waiting up to 5m0s for pod "downward-api-808f64c0-0a58-4889-8c95-0b1b1575d517" in namespace "downward-api-712" to be "Succeeded or Failed" Mar 22 00:05:15.741: INFO: Pod "downward-api-808f64c0-0a58-4889-8c95-0b1b1575d517": Phase="Pending", Reason="", readiness=false. Elapsed: 86.570241ms Mar 22 00:05:18.009: INFO: Pod "downward-api-808f64c0-0a58-4889-8c95-0b1b1575d517": Phase="Pending", Reason="", readiness=false. Elapsed: 2.354341828s Mar 22 00:05:20.207: INFO: Pod "downward-api-808f64c0-0a58-4889-8c95-0b1b1575d517": Phase="Pending", Reason="", readiness=false. Elapsed: 4.552167198s Mar 22 00:05:22.226: INFO: Pod "downward-api-808f64c0-0a58-4889-8c95-0b1b1575d517": Phase="Running", Reason="", readiness=true. Elapsed: 6.571533567s Mar 22 00:05:24.247: INFO: Pod "downward-api-808f64c0-0a58-4889-8c95-0b1b1575d517": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.59250393s STEP: Saw pod success Mar 22 00:05:24.247: INFO: Pod "downward-api-808f64c0-0a58-4889-8c95-0b1b1575d517" satisfied condition "Succeeded or Failed" Mar 22 00:05:24.270: INFO: Trying to get logs from node latest-worker2 pod downward-api-808f64c0-0a58-4889-8c95-0b1b1575d517 container dapi-container: STEP: delete the pod Mar 22 00:05:24.483: INFO: Waiting for pod downward-api-808f64c0-0a58-4889-8c95-0b1b1575d517 to disappear Mar 22 00:05:24.539: INFO: Pod downward-api-808f64c0-0a58-4889-8c95-0b1b1575d517 no longer exists [AfterEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 00:05:24.540: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-712" for this suite. 
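
The pod-UID injection tested above is a downward API env var. A sketch:

import corev1 "k8s.io/api/core/v1"

// The kubelet resolves metadata.uid at container start and injects it as an
// ordinary environment variable; the test container echoes it so the
// framework can assert on the log output.
var podUIDEnv = []corev1.EnvVar{{
	Name: "POD_UID",
	ValueFrom: &corev1.EnvVarSource{
		FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.uid"},
	},
}}
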
• [SLOW TEST:9.322 seconds] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should provide pod UID as env vars [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":330,"completed":68,"skipped":1164,"failed":2,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]"]} SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 00:05:24.614: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating secret with name projected-secret-test-32615099-e569-4629-b49a-c0dd64fcc075 STEP: Creating a pod to test consume secrets Mar 22 00:05:24.836: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-dc156085-2a46-490f-8473-df1c80578eb2" in namespace "projected-4635" to be "Succeeded or Failed" Mar 22 00:05:24.907: INFO: Pod "pod-projected-secrets-dc156085-2a46-490f-8473-df1c80578eb2": Phase="Pending", Reason="", readiness=false. Elapsed: 71.449238ms Mar 22 00:05:27.459: INFO: Pod "pod-projected-secrets-dc156085-2a46-490f-8473-df1c80578eb2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.623116533s Mar 22 00:05:29.472: INFO: Pod "pod-projected-secrets-dc156085-2a46-490f-8473-df1c80578eb2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.636071029s STEP: Saw pod success Mar 22 00:05:29.472: INFO: Pod "pod-projected-secrets-dc156085-2a46-490f-8473-df1c80578eb2" satisfied condition "Succeeded or Failed" Mar 22 00:05:29.478: INFO: Trying to get logs from node latest-worker2 pod pod-projected-secrets-dc156085-2a46-490f-8473-df1c80578eb2 container secret-volume-test: STEP: delete the pod Mar 22 00:05:29.964: INFO: Waiting for pod pod-projected-secrets-dc156085-2a46-490f-8473-df1c80578eb2 to disappear Mar 22 00:05:29.969: INFO: Pod pod-projected-secrets-dc156085-2a46-490f-8473-df1c80578eb2 no longer exists [AfterEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 00:05:29.969: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4635" for this suite. 
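
Consuming one Secret through multiple volumes, as above, reduces to reusing a projected-volume builder like this sketch (names are illustrative); the test mounts two such volumes at different paths in a single pod and reads both back:

import corev1 "k8s.io/api/core/v1"

// secretVolume projects the same Secret under a new volume name; keys become
// files in the mount directory unless remapped via Items.
func secretVolume(name, secretName string) corev1.Volume {
	return corev1.Volume{
		Name: name,
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{{
					Secret: &corev1.SecretProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: secretName},
					},
				}},
			},
		},
	}
}
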
• [SLOW TEST:5.535 seconds] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":330,"completed":69,"skipped":1181,"failed":2,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]"]} SS ------------------------------ [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-node] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 00:05:30.149: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:52 STEP: create the container to handle the HTTPGet hook request. Mar 22 00:05:30.518: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Mar 22 00:05:32.601: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Mar 22 00:05:34.529: INFO: The status of Pod pod-handle-http-request is Running (Ready = true) [It] should execute prestop exec hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: create the pod with lifecycle hook Mar 22 00:05:34.864: INFO: The status of Pod pod-with-prestop-exec-hook is Pending, waiting for it to be Running (with Ready = true) Mar 22 00:05:37.266: INFO: The status of Pod pod-with-prestop-exec-hook is Pending, waiting for it to be Running (with Ready = true) Mar 22 00:05:38.891: INFO: The status of Pod pod-with-prestop-exec-hook is Pending, waiting for it to be Running (with Ready = true) Mar 22 00:05:40.894: INFO: The status of Pod pod-with-prestop-exec-hook is Running (Ready = true) STEP: delete the pod with lifecycle hook Mar 22 00:05:40.991: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 22 00:05:41.026: INFO: Pod pod-with-prestop-exec-hook still exists Mar 22 00:05:43.026: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 22 00:05:43.194: INFO: Pod pod-with-prestop-exec-hook still exists Mar 22 00:05:45.027: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 22 00:05:45.207: INFO: Pod pod-with-prestop-exec-hook still exists Mar 22 00:05:47.027: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 22 00:05:47.062: INFO: Pod pod-with-prestop-exec-hook still exists Mar 22 00:05:49.027: INFO: Waiting for pod pod-with-prestop-exec-hook to 
disappear Mar 22 00:05:49.070: INFO: Pod pod-with-prestop-exec-hook still exists Mar 22 00:05:51.027: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 22 00:05:51.059: INFO: Pod pod-with-prestop-exec-hook still exists Mar 22 00:05:53.027: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 22 00:05:53.086: INFO: Pod pod-with-prestop-exec-hook still exists Mar 22 00:05:55.027: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 22 00:05:55.274: INFO: Pod pod-with-prestop-exec-hook still exists Mar 22 00:05:57.026: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 22 00:05:57.068: INFO: Pod pod-with-prestop-exec-hook still exists Mar 22 00:05:59.028: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 22 00:05:59.105: INFO: Pod pod-with-prestop-exec-hook still exists Mar 22 00:06:01.027: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 22 00:06:01.080: INFO: Pod pod-with-prestop-exec-hook still exists Mar 22 00:06:03.026: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 22 00:06:03.128: INFO: Pod pod-with-prestop-exec-hook still exists Mar 22 00:06:05.027: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 22 00:06:05.127: INFO: Pod pod-with-prestop-exec-hook still exists Mar 22 00:06:07.026: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 22 00:06:07.146: INFO: Pod pod-with-prestop-exec-hook still exists Mar 22 00:06:09.027: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 22 00:06:09.074: INFO: Pod pod-with-prestop-exec-hook still exists Mar 22 00:06:11.030: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 22 00:06:11.046: INFO: Pod pod-with-prestop-exec-hook still exists Mar 22 00:06:13.027: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 22 00:06:13.035: INFO: Pod pod-with-prestop-exec-hook still exists Mar 22 00:06:15.027: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 22 00:06:15.087: INFO: Pod pod-with-prestop-exec-hook still exists Mar 22 00:06:17.026: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 22 00:06:17.081: INFO: Pod pod-with-prestop-exec-hook no longer exists STEP: check prestop hook [AfterEach] [sig-node] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 00:06:17.139: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-9234" for this suite. 
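
The preStop mechanism exercised above hangs off the container's lifecycle. A sketch; the callback URL is an illustrative stand-in for the handler pod's address, and note that this field's type is named corev1.Handler in the k8s.io/api release this suite was built against (renamed LifecycleHandler from v0.23 on):

import corev1 "k8s.io/api/core/v1"

// The preStop exec hook runs inside the container after deletion is requested
// and before SIGTERM. The test's hook calls back to the pod-handle-http-request
// pod, which is how "check prestop hook" above verifies the hook fired.
var hookedContainer = corev1.Container{
	Name:  "pod-with-prestop-exec-hook",
	Image: "k8s.gcr.io/e2e-test-images/agnhost:2.28",
	Lifecycle: &corev1.Lifecycle{
		PreStop: &corev1.LifecycleHandler{
			Exec: &corev1.ExecAction{
				Command: []string{"sh", "-c", "curl http://handler-pod-ip:8080/echo?msg=prestop"},
			},
		},
	},
}
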
• [SLOW TEST:47.135 seconds] [sig-node] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 when create a pod with lifecycle hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:43 should execute prestop exec hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":330,"completed":70,"skipped":1183,"failed":2,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]"]} SS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 00:06:17.285: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 [It] should provide container's memory request [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin Mar 22 00:06:17.574: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3a22533b-a04f-4ef0-afa4-f0532142839b" in namespace "projected-1673" to be "Succeeded or Failed" Mar 22 00:06:17.634: INFO: Pod "downwardapi-volume-3a22533b-a04f-4ef0-afa4-f0532142839b": Phase="Pending", Reason="", readiness=false. Elapsed: 59.186333ms Mar 22 00:06:20.070: INFO: Pod "downwardapi-volume-3a22533b-a04f-4ef0-afa4-f0532142839b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.49589485s Mar 22 00:06:22.383: INFO: Pod "downwardapi-volume-3a22533b-a04f-4ef0-afa4-f0532142839b": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.808573861s STEP: Saw pod success Mar 22 00:06:22.383: INFO: Pod "downwardapi-volume-3a22533b-a04f-4ef0-afa4-f0532142839b" satisfied condition "Succeeded or Failed" Mar 22 00:06:22.504: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-3a22533b-a04f-4ef0-afa4-f0532142839b container client-container: STEP: delete the pod Mar 22 00:06:22.763: INFO: Waiting for pod downwardapi-volume-3a22533b-a04f-4ef0-afa4-f0532142839b to disappear Mar 22 00:06:22.774: INFO: Pod downwardapi-volume-3a22533b-a04f-4ef0-afa4-f0532142839b no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 00:06:22.774: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1673" for this suite. • [SLOW TEST:5.635 seconds] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should provide container's memory request [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":330,"completed":71,"skipped":1185,"failed":2,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 00:06:22.921: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename cronjob STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:63 [It] should replace jobs when ReplaceConcurrent [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a ReplaceConcurrent cronjob Mar 22 00:06:23.225: FAIL: Failed to create CronJob in namespace cronjob-5765 Unexpected error: <*errors.StatusError | 0xc0029a3180>: { ErrStatus: { TypeMeta: {Kind: "", APIVersion: ""}, ListMeta: { SelfLink: "", ResourceVersion: "", Continue: "", RemainingItemCount: nil, }, Status: "Failure", Message: "the server could not find the requested resource", Reason: "NotFound", Details: {Name: "", Group: "", Kind: "", UID: "", Causes: nil, RetryAfterSeconds: 0}, Code: 404, }, } the server could not find the requested resource occurred Full Stack Trace k8s.io/kubernetes/test/e2e/apps.glob..func1.5() /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:168 +0x1f1 k8s.io/kubernetes/test/e2e.RunE2ETests(0xc002c6a180) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c k8s.io/kubernetes/test/e2e.TestE2E(0xc002c6a180) 
_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b testing.tRunner(0xc002c6a180, 0x6d60740) /usr/local/go/src/testing/testing.go:1194 +0xef created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1239 +0x2b3 [AfterEach] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 STEP: Collecting events from namespace "cronjob-5765". STEP: Found 0 events. Mar 22 00:06:23.313: INFO: POD NODE PHASE GRACE CONDITIONS Mar 22 00:06:23.313: INFO: Mar 22 00:06:23.326: INFO: Logging node info for node latest-control-plane Mar 22 00:06:23.395: INFO: Node Info: &Node{ObjectMeta:{latest-control-plane 490b9532-4cb6-4803-8805-500c50bef538 6982636 0 2021-02-19 10:11:38 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-control-plane kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master:] map[kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2021-02-19 10:11:38 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {kubeadm Update v1 2021-02-19 10:11:39 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{}}}}} {kube-controller-manager Update v1 2021-02-19 10:12:16 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:kind://docker/latest/latest-control-plane,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-03-22 00:04:33 +0000 UTC,LastTransitionTime:2021-02-19 10:11:38 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-03-22 00:04:33 +0000 UTC,LastTransitionTime:2021-02-19 10:11:38 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-03-22 00:04:33 +0000 UTC,LastTransitionTime:2021-02-19 10:11:38 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-03-22 00:04:33 +0000 UTC,LastTransitionTime:2021-02-19 10:12:15 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.14,},NodeAddress{Type:Hostname,Address:latest-control-plane,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:2277e49732264d9b915753a27b5b08cc,SystemUUID:3fcd47a6-9190-448f-a26a-9823c0424f23,BootID:b267d78b-f69b-4338-80e8-3f4944338e5d,KernelVersion:4.15.0-118-generic,OSImage:Ubuntu 20.04 LTS,ContainerRuntimeVersion:containerd://1.3.3-14-g449e9269,KubeletVersion:v1.21.0-alpha.0,KubeProxyVersion:v1.21.0-alpha.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/etcd:3.4.13-0],SizeBytes:254659261,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.21.0-alpha.0],SizeBytes:136866158,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.21.0-alpha.0],SizeBytes:95511852,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.21.0-alpha.0],SizeBytes:88147262,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.21.0-alpha.0],SizeBytes:66088748,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:a3f7549ea04c419276e8b84e90a515bbce5bc8a057be2ed974ec45492eca346e k8s.gcr.io/e2e-test-images/agnhost:2.26],SizeBytes:49216572,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:4f3eec1e602e1e0b0e5858c8f3d399d0ef536799ec1f7666e025f211001cb706 
k8s.gcr.io/e2e-test-images/agnhost:2.28],SizeBytes:49210832,},ContainerImage{Names:[k8s.gcr.io/coredns:1.7.0],SizeBytes:45355487,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 docker.io/aquasec/kube-bench:0.3.1],SizeBytes:8042926,},ContainerImage{Names:[k8s.gcr.io/pause:3.2],SizeBytes:685724,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:301268,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 22 00:06:23.395: INFO: Logging kubelet events for node latest-control-plane Mar 22 00:06:23.445: INFO: Logging pods the kubelet thinks is on node latest-control-plane Mar 22 00:06:23.540: INFO: etcd-latest-control-plane started at 2021-02-19 10:11:46 +0000 UTC (0+1 container statuses recorded) Mar 22 00:06:23.540: INFO: Container etcd ready: true, restart count 0 Mar 22 00:06:23.540: INFO: kube-proxy-6jdsd started at 2021-02-19 10:11:54 +0000 UTC (0+1 container statuses recorded) Mar 22 00:06:23.540: INFO: Container kube-proxy ready: true, restart count 0 Mar 22 00:06:23.540: INFO: coredns-74ff55c5b-tqd5x started at 2021-03-22 00:01:48 +0000 UTC (0+1 container statuses recorded) Mar 22 00:06:23.540: INFO: Container coredns ready: true, restart count 0 Mar 22 00:06:23.540: INFO: kindnet-94zqp started at 2021-02-19 10:11:54 +0000 UTC (0+1 container statuses recorded) Mar 22 00:06:23.540: INFO: Container kindnet-cni ready: true, restart count 0 Mar 22 00:06:23.540: INFO: coredns-74ff55c5b-9rxsk started at 2021-03-22 00:01:47 +0000 UTC (0+1 container statuses recorded) Mar 22 00:06:23.540: INFO: Container coredns ready: true, restart count 0 Mar 22 00:06:23.540: INFO: local-path-provisioner-8b46957d4-54gls started at 2021-02-19 10:12:21 +0000 UTC (0+1 container statuses recorded) Mar 22 00:06:23.540: INFO: Container local-path-provisioner ready: true, restart count 0 Mar 22 00:06:23.540: INFO: kube-controller-manager-latest-control-plane started at 2021-02-19 10:11:46 +0000 UTC (0+1 container statuses recorded) Mar 22 00:06:23.540: INFO: Container kube-controller-manager ready: true, restart count 0 Mar 22 00:06:23.540: INFO: kube-scheduler-latest-control-plane started at 2021-02-19 10:11:46 +0000 UTC (0+1 container statuses recorded) Mar 22 00:06:23.540: INFO: Container kube-scheduler ready: true, restart count 0 Mar 22 00:06:23.540: INFO: kube-apiserver-latest-control-plane started at 2021-02-19 10:11:46 +0000 UTC (0+1 container statuses recorded) Mar 22 00:06:23.540: INFO: Container kube-apiserver ready: true, restart count 0 W0322 00:06:23.616482 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. 
Mar 22 00:06:23.817: INFO: Latency metrics for node latest-control-plane Mar 22 00:06:23.817: INFO: Logging node info for node latest-worker Mar 22 00:06:23.835: INFO: Node Info: &Node{ObjectMeta:{latest-worker 52cd6d4b-d53f-435d-801a-04c2822dec44 6985124 0 2021-02-19 10:12:05 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux io.kubernetes.storage.mock/node:some-mock-node kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-worker kubernetes.io/os:linux] map[csi.volume.kubernetes.io/nodeid:{"csi-mock-csi-mock-volumes-11":"csi-mock-csi-mock-volumes-11","csi-mock-csi-mock-volumes-1170":"csi-mock-csi-mock-volumes-1170","csi-mock-csi-mock-volumes-1204":"csi-mock-csi-mock-volumes-1204","csi-mock-csi-mock-volumes-1243":"csi-mock-csi-mock-volumes-1243","csi-mock-csi-mock-volumes-135":"csi-mock-csi-mock-volumes-135","csi-mock-csi-mock-volumes-1517":"csi-mock-csi-mock-volumes-1517","csi-mock-csi-mock-volumes-1561":"csi-mock-csi-mock-volumes-1561","csi-mock-csi-mock-volumes-1595":"csi-mock-csi-mock-volumes-1595","csi-mock-csi-mock-volumes-1714":"csi-mock-csi-mock-volumes-1714","csi-mock-csi-mock-volumes-1732":"csi-mock-csi-mock-volumes-1732","csi-mock-csi-mock-volumes-1876":"csi-mock-csi-mock-volumes-1876","csi-mock-csi-mock-volumes-1945":"csi-mock-csi-mock-volumes-1945","csi-mock-csi-mock-volumes-2041":"csi-mock-csi-mock-volumes-2041","csi-mock-csi-mock-volumes-2057":"csi-mock-csi-mock-volumes-2057","csi-mock-csi-mock-volumes-2208":"csi-mock-csi-mock-volumes-2208","csi-mock-csi-mock-volumes-2302":"csi-mock-csi-mock-volumes-2302","csi-mock-csi-mock-volumes-2459":"csi-mock-csi-mock-volumes-2459","csi-mock-csi-mock-volumes-2511":"csi-mock-csi-mock-volumes-2511","csi-mock-csi-mock-volumes-2706":"csi-mock-csi-mock-volumes-2706","csi-mock-csi-mock-volumes-2710":"csi-mock-csi-mock-volumes-2710","csi-mock-csi-mock-volumes-273":"csi-mock-csi-mock-volumes-273","csi-mock-csi-mock-volumes-2782":"csi-mock-csi-mock-volumes-2782","csi-mock-csi-mock-volumes-2797":"csi-mock-csi-mock-volumes-2797","csi-mock-csi-mock-volumes-2805":"csi-mock-csi-mock-volumes-2805","csi-mock-csi-mock-volumes-2843":"csi-mock-csi-mock-volumes-2843","csi-mock-csi-mock-volumes-3075":"csi-mock-csi-mock-volumes-3075","csi-mock-csi-mock-volumes-3107":"csi-mock-csi-mock-volumes-3107","csi-mock-csi-mock-volumes-3115":"csi-mock-csi-mock-volumes-3115","csi-mock-csi-mock-volumes-3164":"csi-mock-csi-mock-volumes-3164","csi-mock-csi-mock-volumes-3202":"csi-mock-csi-mock-volumes-3202","csi-mock-csi-mock-volumes-3235":"csi-mock-csi-mock-volumes-3235","csi-mock-csi-mock-volumes-3298":"csi-mock-csi-mock-volumes-3298","csi-mock-csi-mock-volumes-3313":"csi-mock-csi-mock-volumes-3313","csi-mock-csi-mock-volumes-3364":"csi-mock-csi-mock-volumes-3364","csi-mock-csi-mock-volumes-3488":"csi-mock-csi-mock-volumes-3488","csi-mock-csi-mock-volumes-3490":"csi-mock-csi-mock-volumes-3490","csi-mock-csi-mock-volumes-3546":"csi-mock-csi-mock-volumes-3546","csi-mock-csi-mock-volumes-3568":"csi-mock-csi-mock-volumes-3568","csi-mock-csi-mock-volumes-3595":"csi-mock-csi-mock-volumes-3595","csi-mock-csi-mock-volumes-3615":"csi-mock-csi-mock-volumes-3615","csi-mock-csi-mock-volumes-3622":"csi-mock-csi-mock-volumes-3622","csi-mock-csi-mock-volumes-3660":"csi-mock-csi-mock-volumes-3660","csi-mock-csi-mock-volumes-3738":"csi-mock-csi-mock-volumes-3738","csi-mock-csi-mock-volumes-380":"csi-mock-csi-mock-volumes-380","csi-mock-csi-mock-volumes-3905":"csi-mock-csi-mock-volumes-3905","csi-mock-csi-mock-volumes-3983":"csi-mock-csi-mock-volumes-3983","csi-mock
-csi-mock-volumes-4658":"csi-mock-csi-mock-volumes-4658","csi-mock-csi-mock-volumes-4689":"csi-mock-csi-mock-volumes-4689","csi-mock-csi-mock-volumes-4839":"csi-mock-csi-mock-volumes-4839","csi-mock-csi-mock-volumes-4871":"csi-mock-csi-mock-volumes-4871","csi-mock-csi-mock-volumes-4885":"csi-mock-csi-mock-volumes-4885","csi-mock-csi-mock-volumes-4888":"csi-mock-csi-mock-volumes-4888","csi-mock-csi-mock-volumes-5028":"csi-mock-csi-mock-volumes-5028","csi-mock-csi-mock-volumes-5118":"csi-mock-csi-mock-volumes-5118","csi-mock-csi-mock-volumes-5120":"csi-mock-csi-mock-volumes-5120","csi-mock-csi-mock-volumes-5160":"csi-mock-csi-mock-volumes-5160","csi-mock-csi-mock-volumes-5164":"csi-mock-csi-mock-volumes-5164","csi-mock-csi-mock-volumes-5225":"csi-mock-csi-mock-volumes-5225","csi-mock-csi-mock-volumes-526":"csi-mock-csi-mock-volumes-526","csi-mock-csi-mock-volumes-5365":"csi-mock-csi-mock-volumes-5365","csi-mock-csi-mock-volumes-5399":"csi-mock-csi-mock-volumes-5399","csi-mock-csi-mock-volumes-5443":"csi-mock-csi-mock-volumes-5443","csi-mock-csi-mock-volumes-5520":"csi-mock-csi-mock-volumes-5520","csi-mock-csi-mock-volumes-5561":"csi-mock-csi-mock-volumes-5561","csi-mock-csi-mock-volumes-5608":"csi-mock-csi-mock-volumes-5608","csi-mock-csi-mock-volumes-5652":"csi-mock-csi-mock-volumes-5652","csi-mock-csi-mock-volumes-5672":"csi-mock-csi-mock-volumes-5672","csi-mock-csi-mock-volumes-569":"csi-mock-csi-mock-volumes-569","csi-mock-csi-mock-volumes-5759":"csi-mock-csi-mock-volumes-5759","csi-mock-csi-mock-volumes-5910":"csi-mock-csi-mock-volumes-5910","csi-mock-csi-mock-volumes-6046":"csi-mock-csi-mock-volumes-6046","csi-mock-csi-mock-volumes-6099":"csi-mock-csi-mock-volumes-6099","csi-mock-csi-mock-volumes-621":"csi-mock-csi-mock-volumes-621","csi-mock-csi-mock-volumes-6347":"csi-mock-csi-mock-volumes-6347","csi-mock-csi-mock-volumes-6447":"csi-mock-csi-mock-volumes-6447","csi-mock-csi-mock-volumes-6752":"csi-mock-csi-mock-volumes-6752","csi-mock-csi-mock-volumes-6763":"csi-mock-csi-mock-volumes-6763","csi-mock-csi-mock-volumes-7184":"csi-mock-csi-mock-volumes-7184","csi-mock-csi-mock-volumes-7244":"csi-mock-csi-mock-volumes-7244","csi-mock-csi-mock-volumes-7259":"csi-mock-csi-mock-volumes-7259","csi-mock-csi-mock-volumes-726":"csi-mock-csi-mock-volumes-726","csi-mock-csi-mock-volumes-7302":"csi-mock-csi-mock-volumes-7302","csi-mock-csi-mock-volumes-7346":"csi-mock-csi-mock-volumes-7346","csi-mock-csi-mock-volumes-7378":"csi-mock-csi-mock-volumes-7378","csi-mock-csi-mock-volumes-7385":"csi-mock-csi-mock-volumes-7385","csi-mock-csi-mock-volumes-746":"csi-mock-csi-mock-volumes-746","csi-mock-csi-mock-volumes-7574":"csi-mock-csi-mock-volumes-7574","csi-mock-csi-mock-volumes-7712":"csi-mock-csi-mock-volumes-7712","csi-mock-csi-mock-volumes-7749":"csi-mock-csi-mock-volumes-7749","csi-mock-csi-mock-volumes-7820":"csi-mock-csi-mock-volumes-7820","csi-mock-csi-mock-volumes-7946":"csi-mock-csi-mock-volumes-7946","csi-mock-csi-mock-volumes-8159":"csi-mock-csi-mock-volumes-8159","csi-mock-csi-mock-volumes-8382":"csi-mock-csi-mock-volumes-8382","csi-mock-csi-mock-volumes-8458":"csi-mock-csi-mock-volumes-8458","csi-mock-csi-mock-volumes-8569":"csi-mock-csi-mock-volumes-8569","csi-mock-csi-mock-volumes-8626":"csi-mock-csi-mock-volumes-8626","csi-mock-csi-mock-volumes-8627":"csi-mock-csi-mock-volumes-8627","csi-mock-csi-mock-volumes-8774":"csi-mock-csi-mock-volumes-8774","csi-mock-csi-mock-volumes-8777":"csi-mock-csi-mock-volumes-8777","csi-mock-csi-mock-volumes-8880":"csi-mock-csi-mock-volumes-8880","csi-mock-
csi-mock-volumes-923":"csi-mock-csi-mock-volumes-923","csi-mock-csi-mock-volumes-9279":"csi-mock-csi-mock-volumes-9279","csi-mock-csi-mock-volumes-9372":"csi-mock-csi-mock-volumes-9372","csi-mock-csi-mock-volumes-9687":"csi-mock-csi-mock-volumes-9687","csi-mock-csi-mock-volumes-969":"csi-mock-csi-mock-volumes-969","csi-mock-csi-mock-volumes-983":"csi-mock-csi-mock-volumes-983","csi-mock-csi-mock-volumes-9858":"csi-mock-csi-mock-volumes-9858","csi-mock-csi-mock-volumes-9983":"csi-mock-csi-mock-volumes-9983","csi-mock-csi-mock-volumes-9992":"csi-mock-csi-mock-volumes-9992","csi-mock-csi-mock-volumes-9995":"csi-mock-csi-mock-volumes-9995"} kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-02-19 10:12:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {kube-controller-manager Update v1 2021-03-21 23:45:22 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}}}}} {kubelet Update v1 2021-03-21 23:45:51 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:csi.volume.kubernetes.io/nodeid":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:io.kubernetes.storage.mock/node":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:kind://docker/latest/latest-worker,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 
DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-03-22 00:05:54 +0000 UTC,LastTransitionTime:2021-02-19 10:12:05 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-03-22 00:05:54 +0000 UTC,LastTransitionTime:2021-02-19 10:12:05 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-03-22 00:05:54 +0000 UTC,LastTransitionTime:2021-02-19 10:12:05 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-03-22 00:05:54 +0000 UTC,LastTransitionTime:2021-02-19 10:12:25 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.9,},NodeAddress{Type:Hostname,Address:latest-worker,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:9501a3ffa7bf40adaca8f4cb3d1dcc93,SystemUUID:6921bb21-67c9-42ea-b514-81b1daac5968,BootID:b267d78b-f69b-4338-80e8-3f4944338e5d,KernelVersion:4.15.0-118-generic,OSImage:Ubuntu 20.04 LTS,ContainerRuntimeVersion:containerd://1.3.3-14-g449e9269,KubeletVersion:v1.21.0-alpha.0,KubeProxyVersion:v1.21.0-alpha.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/ollivier/clearwater-cassandra@sha256:7460a5e29c91e16bd36534b5284b27832e1c6e0f7c22a4bb79eda79942d250e1 docker.io/ollivier/clearwater-cassandra:hunter],SizeBytes:386500834,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead-prov@sha256:ef3803b4336ea217f9a467dac7f5c4bc3d2cd0f3cbbcc6c101419506fc3a6fa4 docker.io/ollivier/clearwater-homestead-prov:hunter],SizeBytes:360721934,},ContainerImage{Names:[docker.io/ollivier/clearwater-ellis@sha256:abcaf4a9ec74aa746f3b3949f05ee359349fce2b0b3b2deedf67164719cff5dc docker.io/ollivier/clearwater-ellis:hunter],SizeBytes:351519591,},ContainerImage{Names:[docker.io/ollivier/clearwater-homer@sha256:f8738d756137990120f6aafca7e0bc57186ab591a4b51508ba9ddf30688f4de1 docker.io/ollivier/clearwater-homer:hunter],SizeBytes:344304298,},ContainerImage{Names:[docker.io/ollivier/clearwater-astaire@sha256:5bc730435563dc3e31f3a3b7bb58b5899c453929b624052622b3a9a1c2fd04d8 docker.io/ollivier/clearwater-astaire:hunter],SizeBytes:327310970,},ContainerImage{Names:[docker.io/ollivier/clearwater-bono@sha256:0da41ddbf72c158f8e617015450587634cdf6856696005601b2108df92a27254 docker.io/ollivier/clearwater-bono:hunter],SizeBytes:303708624,},ContainerImage{Names:[docker.io/ollivier/clearwater-sprout@sha256:fd91f90bdfbfac30288ca296962e36537f8fa311d34711904e027098227d9f49 docker.io/ollivier/clearwater-sprout:hunter],SizeBytes:298627136,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead@sha256:8467d6c9caefa6a402eba927ee31f8179b2dc3386ac4768dd04f4bd630a1b9e9 docker.io/ollivier/clearwater-homestead:hunter],SizeBytes:295167572,},ContainerImage{Names:[docker.io/ollivier/clearwater-ralf@sha256:7054a741dbd4ea45fa4efd8138c07e21d35fbed627971559de5df2d03883e94f 
docker.io/ollivier/clearwater-ralf:hunter],SizeBytes:287441316,},ContainerImage{Names:[docker.io/ollivier/clearwater-chronos@sha256:1d5f9b5385db632e3076463fec5a4db5d4a15ef21ac5a112cb42fb0551ffa36d docker.io/ollivier/clearwater-chronos:hunter],SizeBytes:285504787,},ContainerImage{Names:[k8s.gcr.io/etcd:3.4.13-0],SizeBytes:254659261,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.21.0-alpha.0],SizeBytes:136866158,},ContainerImage{Names:[docker.io/aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 docker.io/aquasec/kube-hunter:0.3.1],SizeBytes:124684106,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:112029652,},ContainerImage{Names:[docker.io/gluster/glusterdynamic-provisioner@sha256:90067cb05a7d217651e84576935934fd9dcff8e6bcdcbaa416bbf36fcd09dbd1 docker.io/gluster/glusterdynamic-provisioner:v1.0],SizeBytes:111200078,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner@sha256:8bc20b52ce066dd4ea3d9eaac40c04ea8a77f47c33789676580cf4c7c9ea3c3d k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner:v1.0],SizeBytes:111199402,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.21.0-alpha.0],SizeBytes:95511852,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.21.0-alpha.0],SizeBytes:88147262,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:85425365,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.21.0-alpha.0],SizeBytes:66088748,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:49230179,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:a3f7549ea04c419276e8b84e90a515bbce5bc8a057be2ed974ec45492eca346e k8s.gcr.io/e2e-test-images/agnhost:2.26],SizeBytes:49216572,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:4f3eec1e602e1e0b0e5858c8f3d399d0ef536799ec1f7666e025f211001cb706 k8s.gcr.io/e2e-test-images/agnhost:2.28],SizeBytes:49210832,},ContainerImage{Names:[docker.io/bitnami/kubectl@sha256:471a2f06834db208d0e4bef5856550f911357c41b72d0baefa591b3714839067 docker.io/bitnami/kubectl:latest],SizeBytes:48898314,},ContainerImage{Names:[k8s.gcr.io/coredns:1.7.0],SizeBytes:45355487,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:41902332,},ContainerImage{Names:[docker.io/library/httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a docker.io/library/httpd:2.4.39-alpine],SizeBytes:41901429,},ContainerImage{Names:[docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 docker.io/library/httpd:2.4.38-alpine],SizeBytes:40765017,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 
k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:40765006,},ContainerImage{Names:[docker.io/ollivier/clearwater-live-test@sha256:1ec1c46f531b5466e62e2a3f1e65e97336e8ea0ca50f11a4305775d6aeac9a58 docker.io/ollivier/clearwater-live-test:hunter],SizeBytes:39060692,},ContainerImage{Names:[k8s.gcr.io/build-image/debian-iptables@sha256:abe8cef9e116f2d5ec1175c386e33841ff3386779138b425af876384b0fd7ccb k8s.gcr.io/build-image/debian-iptables:buster-v1.5.0],SizeBytes:38088315,},ContainerImage{Names:[k8s.gcr.io/build-image/debian-iptables@sha256:4c9410a4ee555dcb0e8b7bd6fc77c65ac400f7c5bd4555df8187630efaea6ea4 k8s.gcr.io/build-image/debian-iptables:buster-v1.3.0],SizeBytes:37934917,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:ff02aacd9766d597883fabafc7ad604c719a57611db1bcc1564c69a45b000a55 gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.17],SizeBytes:25311280,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:24757245,},ContainerImage{Names:[docker.io/pingcap/chaos-daemon@sha256:2138de599ca94104101fcd8f3bd04401f8c88f2bc721d0f77d55a81b85f4d92f docker.io/pingcap/chaos-daemon:v0.8.0],SizeBytes:21885344,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:20c828075d1e36f679d6a91e905b0927141eef5e15be0c9a1ca4a6a0ed9313d2 k8s.gcr.io/sig-storage/csi-provisioner:v2.1.0],SizeBytes:21205045,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:78e3393f5fd5ff6c1e5dada2478cfa456fb7164929e573cf9a87bf6532730679 k8s.gcr.io/sig-storage/csi-provisioner:v1.6.0],SizeBytes:19408504,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:2ffa647e8107cfd39e5f464e738dce014c9f5e51b108da36c3ab621048d0bbab k8s.gcr.io/sig-storage/csi-attacher:v2.2.0],SizeBytes:18451536,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:6c6a0332693a7c456378f6abd2bb40611826c1e1a733cadbdae2daab3125b71c k8s.gcr.io/sig-storage/csi-resizer:v0.5.0],SizeBytes:18412631,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:4051e85640c22f8e00c34dbd273576fc9e1e2829992656588062be9c0f69b04b k8s.gcr.io/e2e-test-images/nonroot:1.1],SizeBytes:17748448,},ContainerImage{Names:[docker.io/pingcap/chaos-mesh@sha256:b90c04665e0275602e52df37d4b6fdd4621df7d4e4c823fc63356559f67dea72 docker.io/pingcap/chaos-mesh:v0.8.0],SizeBytes:13863887,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e docker.io/coredns/coredns:latest docker.io/coredns/coredns:1.8.0],SizeBytes:12945155,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:4a6e0769130686518325b21b0c1d0688b54e7c79244d48e1b15634e98e40c6ef docker.io/coredns/coredns:1.7.1],SizeBytes:12935701,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 
docker.io/aquasec/kube-bench:0.3.1],SizeBytes:8042926,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:9622c6a6dac7499a055a382930f4de82905a3c5735c0753f7094115c9c871309 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v1.3.0],SizeBytes:7717137,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 22 00:06:23.835: INFO: Logging kubelet events for node latest-worker Mar 22 00:06:23.863: INFO: Logging pods the kubelet thinks is on node latest-worker Mar 22 00:06:23.913: INFO: kube-proxy-5wvjm started at 2021-02-19 10:12:05 +0000 UTC (0+1 container statuses recorded) Mar 22 00:06:23.913: INFO: Container kube-proxy ready: true, restart count 0 Mar 22 00:06:23.913: INFO: netserver-0 started at 2021-03-22 00:05:15 +0000 UTC (0+1 container statuses recorded) Mar 22 00:06:23.913: INFO: Container webserver ready: false, restart count 0 Mar 22 00:06:23.913: INFO: test-container-pod started at 2021-03-22 00:06:18 +0000 UTC (0+1 container statuses recorded) Mar 22 00:06:23.913: INFO: Container webserver ready: true, restart count 0 Mar 22 00:06:23.913: INFO: netserver-0 started at 2021-03-22 00:05:55 +0000 UTC (0+1 container statuses recorded) Mar 22 00:06:23.913: INFO: Container webserver ready: true, restart count 0 Mar 22 00:06:23.913: INFO: kindnet-l4mzm started at 2021-03-22 00:02:51 +0000 UTC (0+1 container statuses recorded) Mar 22 00:06:23.913: INFO: Container kindnet-cni ready: true, restart count 0 Mar 22 00:06:23.913: INFO: chaos-controller-manager-69c479c674-rdmrr started at 2021-03-22 00:02:52 +0000 UTC (0+1 container statuses recorded) Mar 22 00:06:23.913: INFO: Container chaos-mesh ready: true, restart count 0 Mar 22 00:06:23.913: INFO: chaos-daemon-vb9xf started at 2021-03-22 00:02:51 +0000 UTC (0+1 container statuses recorded) Mar 22 00:06:23.913: INFO: Container chaos-daemon ready: true, restart count 0 W0322 00:06:23.984801 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. 
Mar 22 00:06:24.338: INFO: Latency metrics for node latest-worker Mar 22 00:06:24.338: INFO: Logging node info for node latest-worker2 Mar 22 00:06:24.399: INFO: Node Info: &Node{ObjectMeta:{latest-worker2 7d2a1377-0c6f-45fb-899e-6c307ecb1803 6984897 0 2021-02-19 10:12:05 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux io.kubernetes.storage.mock/node:some-mock-node kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-worker2 kubernetes.io/os:linux] map[csi.volume.kubernetes.io/nodeid:{"csi-mock-csi-mock-volumes-1005":"csi-mock-csi-mock-volumes-1005","csi-mock-csi-mock-volumes-1125":"csi-mock-csi-mock-volumes-1125","csi-mock-csi-mock-volumes-1158":"csi-mock-csi-mock-volumes-1158","csi-mock-csi-mock-volumes-1167":"csi-mock-csi-mock-volumes-1167","csi-mock-csi-mock-volumes-117":"csi-mock-csi-mock-volumes-117","csi-mock-csi-mock-volumes-123":"csi-mock-csi-mock-volumes-123","csi-mock-csi-mock-volumes-1248":"csi-mock-csi-mock-volumes-1248","csi-mock-csi-mock-volumes-132":"csi-mock-csi-mock-volumes-132","csi-mock-csi-mock-volumes-1462":"csi-mock-csi-mock-volumes-1462","csi-mock-csi-mock-volumes-1508":"csi-mock-csi-mock-volumes-1508","csi-mock-csi-mock-volumes-1620":"csi-mock-csi-mock-volumes-1620","csi-mock-csi-mock-volumes-1694":"csi-mock-csi-mock-volumes-1694","csi-mock-csi-mock-volumes-1823":"csi-mock-csi-mock-volumes-1823","csi-mock-csi-mock-volumes-1829":"csi-mock-csi-mock-volumes-1829","csi-mock-csi-mock-volumes-1852":"csi-mock-csi-mock-volumes-1852","csi-mock-csi-mock-volumes-1856":"csi-mock-csi-mock-volumes-1856","csi-mock-csi-mock-volumes-194":"csi-mock-csi-mock-volumes-194","csi-mock-csi-mock-volumes-1943":"csi-mock-csi-mock-volumes-1943","csi-mock-csi-mock-volumes-2103":"csi-mock-csi-mock-volumes-2103","csi-mock-csi-mock-volumes-2109":"csi-mock-csi-mock-volumes-2109","csi-mock-csi-mock-volumes-2134":"csi-mock-csi-mock-volumes-2134","csi-mock-csi-mock-volumes-2212":"csi-mock-csi-mock-volumes-2212","csi-mock-csi-mock-volumes-2302":"csi-mock-csi-mock-volumes-2302","csi-mock-csi-mock-volumes-2308":"csi-mock-csi-mock-volumes-2308","csi-mock-csi-mock-volumes-2407":"csi-mock-csi-mock-volumes-2407","csi-mock-csi-mock-volumes-2458":"csi-mock-csi-mock-volumes-2458","csi-mock-csi-mock-volumes-2474":"csi-mock-csi-mock-volumes-2474","csi-mock-csi-mock-volumes-254":"csi-mock-csi-mock-volumes-254","csi-mock-csi-mock-volumes-2575":"csi-mock-csi-mock-volumes-2575","csi-mock-csi-mock-volumes-2622":"csi-mock-csi-mock-volumes-2622","csi-mock-csi-mock-volumes-2651":"csi-mock-csi-mock-volumes-2651","csi-mock-csi-mock-volumes-2668":"csi-mock-csi-mock-volumes-2668","csi-mock-csi-mock-volumes-2781":"csi-mock-csi-mock-volumes-2781","csi-mock-csi-mock-volumes-2791":"csi-mock-csi-mock-volumes-2791","csi-mock-csi-mock-volumes-2823":"csi-mock-csi-mock-volumes-2823","csi-mock-csi-mock-volumes-2847":"csi-mock-csi-mock-volumes-2847","csi-mock-csi-mock-volumes-295":"csi-mock-csi-mock-volumes-295","csi-mock-csi-mock-volumes-3129":"csi-mock-csi-mock-volumes-3129","csi-mock-csi-mock-volumes-3190":"csi-mock-csi-mock-volumes-3190","csi-mock-csi-mock-volumes-3428":"csi-mock-csi-mock-volumes-3428","csi-mock-csi-mock-volumes-3490":"csi-mock-csi-mock-volumes-3490","csi-mock-csi-mock-volumes-3491":"csi-mock-csi-mock-volumes-3491","csi-mock-csi-mock-volumes-3532":"csi-mock-csi-mock-volumes-3532","csi-mock-csi-mock-volumes-3609":"csi-mock-csi-mock-volumes-3609","csi-mock-csi-mock-volumes-368":"csi-mock-csi-mock-volumes-368","csi-mock-csi-mock-volumes-3698":"csi-mock-csi-mock-volumes-3698","csi-mock-csi-moc
k-volumes-3714":"csi-mock-csi-mock-volumes-3714","csi-mock-csi-mock-volumes-3720":"csi-mock-csi-mock-volumes-3720","csi-mock-csi-mock-volumes-3722":"csi-mock-csi-mock-volumes-3722","csi-mock-csi-mock-volumes-3723":"csi-mock-csi-mock-volumes-3723","csi-mock-csi-mock-volumes-3783":"csi-mock-csi-mock-volumes-3783","csi-mock-csi-mock-volumes-3887":"csi-mock-csi-mock-volumes-3887","csi-mock-csi-mock-volumes-3973":"csi-mock-csi-mock-volumes-3973","csi-mock-csi-mock-volumes-4074":"csi-mock-csi-mock-volumes-4074","csi-mock-csi-mock-volumes-4164":"csi-mock-csi-mock-volumes-4164","csi-mock-csi-mock-volumes-4181":"csi-mock-csi-mock-volumes-4181","csi-mock-csi-mock-volumes-4442":"csi-mock-csi-mock-volumes-4442","csi-mock-csi-mock-volumes-4483":"csi-mock-csi-mock-volumes-4483","csi-mock-csi-mock-volumes-4549":"csi-mock-csi-mock-volumes-4549","csi-mock-csi-mock-volumes-4707":"csi-mock-csi-mock-volumes-4707","csi-mock-csi-mock-volumes-4906":"csi-mock-csi-mock-volumes-4906","csi-mock-csi-mock-volumes-4977":"csi-mock-csi-mock-volumes-4977","csi-mock-csi-mock-volumes-5116":"csi-mock-csi-mock-volumes-5116","csi-mock-csi-mock-volumes-5166":"csi-mock-csi-mock-volumes-5166","csi-mock-csi-mock-volumes-5233":"csi-mock-csi-mock-volumes-5233","csi-mock-csi-mock-volumes-5344":"csi-mock-csi-mock-volumes-5344","csi-mock-csi-mock-volumes-5406":"csi-mock-csi-mock-volumes-5406","csi-mock-csi-mock-volumes-5433":"csi-mock-csi-mock-volumes-5433","csi-mock-csi-mock-volumes-5483":"csi-mock-csi-mock-volumes-5483","csi-mock-csi-mock-volumes-5486":"csi-mock-csi-mock-volumes-5486","csi-mock-csi-mock-volumes-5520":"csi-mock-csi-mock-volumes-5520","csi-mock-csi-mock-volumes-5540":"csi-mock-csi-mock-volumes-5540","csi-mock-csi-mock-volumes-5592":"csi-mock-csi-mock-volumes-5592","csi-mock-csi-mock-volumes-5741":"csi-mock-csi-mock-volumes-5741","csi-mock-csi-mock-volumes-5753":"csi-mock-csi-mock-volumes-5753","csi-mock-csi-mock-volumes-5790":"csi-mock-csi-mock-volumes-5790","csi-mock-csi-mock-volumes-5820":"csi-mock-csi-mock-volumes-5820","csi-mock-csi-mock-volumes-5830":"csi-mock-csi-mock-volumes-5830","csi-mock-csi-mock-volumes-5880":"csi-mock-csi-mock-volumes-5880","csi-mock-csi-mock-volumes-5886":"csi-mock-csi-mock-volumes-5886","csi-mock-csi-mock-volumes-5899":"csi-mock-csi-mock-volumes-5899","csi-mock-csi-mock-volumes-5925":"csi-mock-csi-mock-volumes-5925","csi-mock-csi-mock-volumes-5928":"csi-mock-csi-mock-volumes-5928","csi-mock-csi-mock-volumes-6098":"csi-mock-csi-mock-volumes-6098","csi-mock-csi-mock-volumes-6154":"csi-mock-csi-mock-volumes-6154","csi-mock-csi-mock-volumes-6193":"csi-mock-csi-mock-volumes-6193","csi-mock-csi-mock-volumes-6237":"csi-mock-csi-mock-volumes-6237","csi-mock-csi-mock-volumes-6393":"csi-mock-csi-mock-volumes-6393","csi-mock-csi-mock-volumes-6394":"csi-mock-csi-mock-volumes-6394","csi-mock-csi-mock-volumes-6468":"csi-mock-csi-mock-volumes-6468","csi-mock-csi-mock-volumes-6508":"csi-mock-csi-mock-volumes-6508","csi-mock-csi-mock-volumes-6516":"csi-mock-csi-mock-volumes-6516","csi-mock-csi-mock-volumes-6520":"csi-mock-csi-mock-volumes-6520","csi-mock-csi-mock-volumes-6574":"csi-mock-csi-mock-volumes-6574","csi-mock-csi-mock-volumes-6663":"csi-mock-csi-mock-volumes-6663","csi-mock-csi-mock-volumes-6715":"csi-mock-csi-mock-volumes-6715","csi-mock-csi-mock-volumes-6754":"csi-mock-csi-mock-volumes-6754","csi-mock-csi-mock-volumes-6804":"csi-mock-csi-mock-volumes-6804","csi-mock-csi-mock-volumes-6918":"csi-mock-csi-mock-volumes-6918","csi-mock-csi-mock-volumes-6925":"csi-mock-csi-mock-volumes-6925","csi-moc
k-csi-mock-volumes-7092":"csi-mock-csi-mock-volumes-7092","csi-mock-csi-mock-volumes-7139":"csi-mock-csi-mock-volumes-7139","csi-mock-csi-mock-volumes-7270":"csi-mock-csi-mock-volumes-7270","csi-mock-csi-mock-volumes-7273":"csi-mock-csi-mock-volumes-7273","csi-mock-csi-mock-volumes-7442":"csi-mock-csi-mock-volumes-7442","csi-mock-csi-mock-volumes-7448":"csi-mock-csi-mock-volumes-7448","csi-mock-csi-mock-volumes-7543":"csi-mock-csi-mock-volumes-7543","csi-mock-csi-mock-volumes-7597":"csi-mock-csi-mock-volumes-7597","csi-mock-csi-mock-volumes-7608":"csi-mock-csi-mock-volumes-7608","csi-mock-csi-mock-volumes-7642":"csi-mock-csi-mock-volumes-7642","csi-mock-csi-mock-volumes-7659":"csi-mock-csi-mock-volumes-7659","csi-mock-csi-mock-volumes-7725":"csi-mock-csi-mock-volumes-7725","csi-mock-csi-mock-volumes-7760":"csi-mock-csi-mock-volumes-7760","csi-mock-csi-mock-volumes-778":"csi-mock-csi-mock-volumes-778","csi-mock-csi-mock-volumes-7811":"csi-mock-csi-mock-volumes-7811","csi-mock-csi-mock-volumes-7819":"csi-mock-csi-mock-volumes-7819","csi-mock-csi-mock-volumes-791":"csi-mock-csi-mock-volumes-791","csi-mock-csi-mock-volumes-7929":"csi-mock-csi-mock-volumes-7929","csi-mock-csi-mock-volumes-7930":"csi-mock-csi-mock-volumes-7930","csi-mock-csi-mock-volumes-7933":"csi-mock-csi-mock-volumes-7933","csi-mock-csi-mock-volumes-7950":"csi-mock-csi-mock-volumes-7950","csi-mock-csi-mock-volumes-8005":"csi-mock-csi-mock-volumes-8005","csi-mock-csi-mock-volumes-8070":"csi-mock-csi-mock-volumes-8070","csi-mock-csi-mock-volumes-8123":"csi-mock-csi-mock-volumes-8123","csi-mock-csi-mock-volumes-8132":"csi-mock-csi-mock-volumes-8132","csi-mock-csi-mock-volumes-8134":"csi-mock-csi-mock-volumes-8134","csi-mock-csi-mock-volumes-8381":"csi-mock-csi-mock-volumes-8381","csi-mock-csi-mock-volumes-8391":"csi-mock-csi-mock-volumes-8391","csi-mock-csi-mock-volumes-8409":"csi-mock-csi-mock-volumes-8409","csi-mock-csi-mock-volumes-8420":"csi-mock-csi-mock-volumes-8420","csi-mock-csi-mock-volumes-8467":"csi-mock-csi-mock-volumes-8467","csi-mock-csi-mock-volumes-8581":"csi-mock-csi-mock-volumes-8581","csi-mock-csi-mock-volumes-8619":"csi-mock-csi-mock-volumes-8619","csi-mock-csi-mock-volumes-8708":"csi-mock-csi-mock-volumes-8708","csi-mock-csi-mock-volumes-8766":"csi-mock-csi-mock-volumes-8766","csi-mock-csi-mock-volumes-877":"csi-mock-csi-mock-volumes-877","csi-mock-csi-mock-volumes-8789":"csi-mock-csi-mock-volumes-8789","csi-mock-csi-mock-volumes-8800":"csi-mock-csi-mock-volumes-8800","csi-mock-csi-mock-volumes-8819":"csi-mock-csi-mock-volumes-8819","csi-mock-csi-mock-volumes-8830":"csi-mock-csi-mock-volumes-8830","csi-mock-csi-mock-volumes-9034":"csi-mock-csi-mock-volumes-9034","csi-mock-csi-mock-volumes-9051":"csi-mock-csi-mock-volumes-9051","csi-mock-csi-mock-volumes-9052":"csi-mock-csi-mock-volumes-9052","csi-mock-csi-mock-volumes-9211":"csi-mock-csi-mock-volumes-9211","csi-mock-csi-mock-volumes-9234":"csi-mock-csi-mock-volumes-9234","csi-mock-csi-mock-volumes-9238":"csi-mock-csi-mock-volumes-9238","csi-mock-csi-mock-volumes-9316":"csi-mock-csi-mock-volumes-9316","csi-mock-csi-mock-volumes-9344":"csi-mock-csi-mock-volumes-9344","csi-mock-csi-mock-volumes-9386":"csi-mock-csi-mock-volumes-9386","csi-mock-csi-mock-volumes-941":"csi-mock-csi-mock-volumes-941","csi-mock-csi-mock-volumes-9416":"csi-mock-csi-mock-volumes-9416","csi-mock-csi-mock-volumes-9426":"csi-mock-csi-mock-volumes-9426","csi-mock-csi-mock-volumes-9429":"csi-mock-csi-mock-volumes-9429","csi-mock-csi-mock-volumes-9438":"csi-mock-csi-mock-volumes-9438","csi-mo
ck-csi-mock-volumes-9439":"csi-mock-csi-mock-volumes-9439","csi-mock-csi-mock-volumes-9665":"csi-mock-csi-mock-volumes-9665","csi-mock-csi-mock-volumes-9772":"csi-mock-csi-mock-volumes-9772","csi-mock-csi-mock-volumes-9793":"csi-mock-csi-mock-volumes-9793","csi-mock-csi-mock-volumes-9921":"csi-mock-csi-mock-volumes-9921","csi-mock-csi-mock-volumes-9953":"csi-mock-csi-mock-volumes-9953"} kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-02-19 10:12:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {kube-controller-manager Update v1 2021-03-21 23:58:13 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}} {kubelet Update v1 2021-03-21 23:58:43 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:csi.volume.kubernetes.io/nodeid":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:io.kubernetes.storage.mock/node":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:kind://docker/latest/latest-worker2,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-03-22 00:03:43 
+0000 UTC,LastTransitionTime:2021-02-19 10:12:05 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-03-22 00:03:43 +0000 UTC,LastTransitionTime:2021-02-19 10:12:05 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-03-22 00:03:43 +0000 UTC,LastTransitionTime:2021-02-19 10:12:05 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-03-22 00:03:43 +0000 UTC,LastTransitionTime:2021-02-19 10:12:35 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.13,},NodeAddress{Type:Hostname,Address:latest-worker2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:c99819c175ab4bf18f91789643e4cec7,SystemUUID:67d3f1bb-f61c-4599-98ac-5879bee4ddde,BootID:b267d78b-f69b-4338-80e8-3f4944338e5d,KernelVersion:4.15.0-118-generic,OSImage:Ubuntu 20.04 LTS,ContainerRuntimeVersion:containerd://1.3.3-14-g449e9269,KubeletVersion:v1.21.0-alpha.0,KubeProxyVersion:v1.21.0-alpha.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/ollivier/clearwater-cassandra@sha256:7460a5e29c91e16bd36534b5284b27832e1c6e0f7c22a4bb79eda79942d250e1 docker.io/ollivier/clearwater-cassandra:hunter],SizeBytes:386500834,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead-prov@sha256:ef3803b4336ea217f9a467dac7f5c4bc3d2cd0f3cbbcc6c101419506fc3a6fa4 docker.io/ollivier/clearwater-homestead-prov:hunter],SizeBytes:360721934,},ContainerImage{Names:[docker.io/ollivier/clearwater-ellis@sha256:abcaf4a9ec74aa746f3b3949f05ee359349fce2b0b3b2deedf67164719cff5dc docker.io/ollivier/clearwater-ellis:hunter],SizeBytes:351519591,},ContainerImage{Names:[docker.io/ollivier/clearwater-homer@sha256:f8738d756137990120f6aafca7e0bc57186ab591a4b51508ba9ddf30688f4de1 docker.io/ollivier/clearwater-homer:hunter],SizeBytes:344304298,},ContainerImage{Names:[docker.io/ollivier/clearwater-astaire@sha256:5bc730435563dc3e31f3a3b7bb58b5899c453929b624052622b3a9a1c2fd04d8 docker.io/ollivier/clearwater-astaire:hunter],SizeBytes:327310970,},ContainerImage{Names:[docker.io/ollivier/clearwater-bono@sha256:0da41ddbf72c158f8e617015450587634cdf6856696005601b2108df92a27254 docker.io/ollivier/clearwater-bono:hunter],SizeBytes:303708624,},ContainerImage{Names:[docker.io/ollivier/clearwater-sprout@sha256:fd91f90bdfbfac30288ca296962e36537f8fa311d34711904e027098227d9f49 docker.io/ollivier/clearwater-sprout:hunter],SizeBytes:298627136,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead@sha256:8467d6c9caefa6a402eba927ee31f8179b2dc3386ac4768dd04f4bd630a1b9e9 docker.io/ollivier/clearwater-homestead:hunter],SizeBytes:295167572,},ContainerImage{Names:[docker.io/ollivier/clearwater-ralf@sha256:7054a741dbd4ea45fa4efd8138c07e21d35fbed627971559de5df2d03883e94f docker.io/ollivier/clearwater-ralf:hunter],SizeBytes:287441316,},ContainerImage{Names:[docker.io/ollivier/clearwater-chronos@sha256:1d5f9b5385db632e3076463fec5a4db5d4a15ef21ac5a112cb42fb0551ffa36d 
docker.io/ollivier/clearwater-chronos:hunter],SizeBytes:285504787,},ContainerImage{Names:[k8s.gcr.io/etcd:3.4.13-0],SizeBytes:254659261,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.21.0-alpha.0],SizeBytes:136866158,},ContainerImage{Names:[docker.io/aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 docker.io/aquasec/kube-hunter:0.3.1],SizeBytes:124684106,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:112029652,},ContainerImage{Names:[docker.io/gluster/glusterdynamic-provisioner@sha256:90067cb05a7d217651e84576935934fd9dcff8e6bcdcbaa416bbf36fcd09dbd1 docker.io/gluster/glusterdynamic-provisioner:v1.0],SizeBytes:111200078,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner@sha256:8bc20b52ce066dd4ea3d9eaac40c04ea8a77f47c33789676580cf4c7c9ea3c3d k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner:v1.0],SizeBytes:111199402,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.21.0-alpha.0],SizeBytes:95511852,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.21.0-alpha.0],SizeBytes:88147262,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:85425365,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.21.0-alpha.0],SizeBytes:66088748,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:49230179,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:a3f7549ea04c419276e8b84e90a515bbce5bc8a057be2ed974ec45492eca346e k8s.gcr.io/e2e-test-images/agnhost:2.26],SizeBytes:49216572,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:4f3eec1e602e1e0b0e5858c8f3d399d0ef536799ec1f7666e025f211001cb706 k8s.gcr.io/e2e-test-images/agnhost:2.28],SizeBytes:49210832,},ContainerImage{Names:[k8s.gcr.io/coredns:1.7.0],SizeBytes:45355487,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:41902332,},ContainerImage{Names:[docker.io/library/httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a docker.io/library/httpd:2.4.39-alpine],SizeBytes:41901429,},ContainerImage{Names:[docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 docker.io/library/httpd:2.4.38-alpine],SizeBytes:40765017,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:40765006,},ContainerImage{Names:[docker.io/ollivier/clearwater-live-test@sha256:1ec1c46f531b5466e62e2a3f1e65e97336e8ea0ca50f11a4305775d6aeac9a58 docker.io/ollivier/clearwater-live-test:hunter],SizeBytes:39060692,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:ff02aacd9766d597883fabafc7ad604c719a57611db1bcc1564c69a45b000a55 
gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.17],SizeBytes:25311280,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:24757245,},ContainerImage{Names:[docker.io/pingcap/chaos-daemon@sha256:2138de599ca94104101fcd8f3bd04401f8c88f2bc721d0f77d55a81b85f4d92f docker.io/pingcap/chaos-daemon:v0.8.0],SizeBytes:21885344,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:20c828075d1e36f679d6a91e905b0927141eef5e15be0c9a1ca4a6a0ed9313d2 k8s.gcr.io/sig-storage/csi-provisioner:v2.1.0],SizeBytes:21205045,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:78e3393f5fd5ff6c1e5dada2478cfa456fb7164929e573cf9a87bf6532730679 k8s.gcr.io/sig-storage/csi-provisioner:v1.6.0],SizeBytes:19408504,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/regression-issue-74839@sha256:b4f1d8d61bdad84bd50442d161d5460e4019d53e989b64220fdbc62fc87d76bf k8s.gcr.io/e2e-test-images/regression-issue-74839:1.2],SizeBytes:18651485,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:2ffa647e8107cfd39e5f464e738dce014c9f5e51b108da36c3ab621048d0bbab k8s.gcr.io/sig-storage/csi-attacher:v2.2.0],SizeBytes:18451536,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:6c6a0332693a7c456378f6abd2bb40611826c1e1a733cadbdae2daab3125b71c k8s.gcr.io/sig-storage/csi-resizer:v0.5.0],SizeBytes:18412631,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:4051e85640c22f8e00c34dbd273576fc9e1e2829992656588062be9c0f69b04b k8s.gcr.io/e2e-test-images/nonroot:1.1],SizeBytes:17748448,},ContainerImage{Names:[docker.io/pingcap/chaos-mesh@sha256:b90c04665e0275602e52df37d4b6fdd4621df7d4e4c823fc63356559f67dea72 docker.io/pingcap/chaos-mesh:v0.8.0],SizeBytes:13863887,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e docker.io/coredns/coredns:1.8.0],SizeBytes:12945155,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:4a6e0769130686518325b21b0c1d0688b54e7c79244d48e1b15634e98e40c6ef docker.io/coredns/coredns:1.7.1],SizeBytes:12935701,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:642ff9910da6ea9a8624b0234eef52af9ca75ecbec474c5507cb096bdfbae4e5 docker.io/coredns/coredns:latest],SizeBytes:12893350,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 docker.io/aquasec/kube-bench:0.3.1],SizeBytes:8042926,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:9622c6a6dac7499a055a382930f4de82905a3c5735c0753f7094115c9c871309 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v1.3.0],SizeBytes:7717137,},ContainerImage{Names:[quay.io/coreos/etcd:v2.2.5],SizeBytes:7670543,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 22 00:06:24.400: INFO: Logging kubelet events for node latest-worker2 Mar 22 00:06:24.452: INFO: Logging pods the kubelet thinks is on node latest-worker2 Mar 22 00:06:24.477: INFO: chaos-daemon-4zjcg started at 2021-03-22 
00:02:52 +0000 UTC (0+1 container statuses recorded) Mar 22 00:06:24.477: INFO: Container chaos-daemon ready: true, restart count 0 Mar 22 00:06:24.477: INFO: pod-handle-http-request started at 2021-03-22 00:05:30 +0000 UTC (0+1 container statuses recorded) Mar 22 00:06:24.477: INFO: Container agnhost-container ready: true, restart count 0 Mar 22 00:06:24.477: INFO: back-off-cap started at 2021-03-22 00:02:52 +0000 UTC (0+1 container statuses recorded) Mar 22 00:06:24.477: INFO: Container back-off-cap ready: false, restart count 4 Mar 22 00:06:24.477: INFO: netserver-1 started at 2021-03-22 00:05:55 +0000 UTC (0+1 container statuses recorded) Mar 22 00:06:24.477: INFO: Container webserver ready: true, restart count 0 Mar 22 00:06:24.477: INFO: kube-proxy-7q92q started at 2021-02-19 10:12:05 +0000 UTC (0+1 container statuses recorded) Mar 22 00:06:24.477: INFO: Container kube-proxy ready: true, restart count 0 Mar 22 00:06:24.477: INFO: kindnet-7qb7q started at 2021-03-22 00:02:52 +0000 UTC (0+1 container statuses recorded) Mar 22 00:06:24.477: INFO: Container kindnet-cni ready: true, restart count 0 Mar 22 00:06:24.477: INFO: csi-mockplugin-0 started at 2021-03-22 00:05:40 +0000 UTC (0+3 container statuses recorded) Mar 22 00:06:24.477: INFO: Container csi-provisioner ready: true, restart count 0 Mar 22 00:06:24.477: INFO: Container driver-registrar ready: true, restart count 0 Mar 22 00:06:24.477: INFO: Container mock ready: true, restart count 0 W0322 00:06:24.533546 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Mar 22 00:06:24.875: INFO: Latency metrics for node latest-worker2 Mar 22 00:06:24.875: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "cronjob-5765" for this suite. 
• Failure [2.226 seconds] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should replace jobs when ReplaceConcurrent [Conformance] [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Mar 22 00:06:23.225: Failed to create CronJob in namespace cronjob-5765 Unexpected error: <*errors.StatusError | 0xc0029a3180>: { ErrStatus: { TypeMeta: {Kind: "", APIVersion: ""}, ListMeta: { SelfLink: "", ResourceVersion: "", Continue: "", RemainingItemCount: nil, }, Status: "Failure", Message: "the server could not find the requested resource", Reason: "NotFound", Details: {Name: "", Group: "", Kind: "", UID: "", Causes: nil, RetryAfterSeconds: 0}, Code: 404, }, } the server could not find the requested resource occurred /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:168 ------------------------------ {"msg":"FAILED [sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","total":330,"completed":71,"skipped":1224,"failed":3,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 00:06:25.147: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with secret pod [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating pod pod-subpath-test-secret-wplb STEP: Creating a pod to test atomic-volume-subpath Mar 22 00:06:25.374: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-wplb" in namespace "subpath-9038" to be "Succeeded or Failed" Mar 22 00:06:25.411: INFO: Pod "pod-subpath-test-secret-wplb": Phase="Pending", Reason="", readiness=false. Elapsed: 36.975828ms Mar 22 00:06:27.465: INFO: Pod "pod-subpath-test-secret-wplb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0915517s Mar 22 00:06:29.689: INFO: Pod "pod-subpath-test-secret-wplb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.315438845s Mar 22 00:06:31.717: INFO: Pod "pod-subpath-test-secret-wplb": Phase="Running", Reason="", readiness=true. Elapsed: 6.343240704s Mar 22 00:06:33.729: INFO: Pod "pod-subpath-test-secret-wplb": Phase="Running", Reason="", readiness=true. Elapsed: 8.355221576s Mar 22 00:06:35.781: INFO: Pod "pod-subpath-test-secret-wplb": Phase="Running", Reason="", readiness=true. 
Elapsed: 10.407362969s Mar 22 00:06:37.786: INFO: Pod "pod-subpath-test-secret-wplb": Phase="Running", Reason="", readiness=true. Elapsed: 12.412484893s Mar 22 00:06:39.791: INFO: Pod "pod-subpath-test-secret-wplb": Phase="Running", Reason="", readiness=true. Elapsed: 14.417419385s Mar 22 00:06:41.797: INFO: Pod "pod-subpath-test-secret-wplb": Phase="Running", Reason="", readiness=true. Elapsed: 16.422661146s Mar 22 00:06:43.802: INFO: Pod "pod-subpath-test-secret-wplb": Phase="Running", Reason="", readiness=true. Elapsed: 18.427740766s Mar 22 00:06:45.806: INFO: Pod "pod-subpath-test-secret-wplb": Phase="Running", Reason="", readiness=true. Elapsed: 20.432248724s Mar 22 00:06:47.812: INFO: Pod "pod-subpath-test-secret-wplb": Phase="Running", Reason="", readiness=true. Elapsed: 22.437863026s Mar 22 00:06:49.816: INFO: Pod "pod-subpath-test-secret-wplb": Phase="Running", Reason="", readiness=true. Elapsed: 24.442209262s Mar 22 00:06:51.820: INFO: Pod "pod-subpath-test-secret-wplb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.445992518s STEP: Saw pod success Mar 22 00:06:51.820: INFO: Pod "pod-subpath-test-secret-wplb" satisfied condition "Succeeded or Failed" Mar 22 00:06:51.823: INFO: Trying to get logs from node latest-worker pod pod-subpath-test-secret-wplb container test-container-subpath-secret-wplb: STEP: delete the pod Mar 22 00:06:51.886: INFO: Waiting for pod pod-subpath-test-secret-wplb to disappear Mar 22 00:06:51.892: INFO: Pod pod-subpath-test-secret-wplb no longer exists STEP: Deleting pod pod-subpath-test-secret-wplb Mar 22 00:06:51.892: INFO: Deleting pod "pod-subpath-test-secret-wplb" in namespace "subpath-9038" [AfterEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 00:06:51.895: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-9038" for this suite. 
• [SLOW TEST:26.756 seconds] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with secret pod [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":330,"completed":72,"skipped":1247,"failed":3,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]"]} SSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 00:06:51.903: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a watch on configmaps STEP: creating a new configmap STEP: modifying the configmap once STEP: closing the watch once it receives two notifications Mar 22 00:06:52.039: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-6635 bbae0866-f6a7-4c02-8bbd-f79d1b0ed8bf 6986392 0 2021-03-22 00:06:52 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2021-03-22 00:06:52 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Mar 22 00:06:52.039: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-6635 bbae0866-f6a7-4c02-8bbd-f79d1b0ed8bf 6986393 0 2021-03-22 00:06:52 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2021-03-22 00:06:52 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying the configmap a second time, while the watch is closed STEP: creating a new watch on configmaps from the last resource version observed by the first watch STEP: deleting the configmap STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed Mar 22 00:06:52.051: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-6635 bbae0866-f6a7-4c02-8bbd-f79d1b0ed8bf 6986394 0 2021-03-22 00:06:52 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 
2021-03-22 00:06:52 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Mar 22 00:06:52.051: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-6635 bbae0866-f6a7-4c02-8bbd-f79d1b0ed8bf 6986395 0 2021-03-22 00:06:52 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2021-03-22 00:06:52 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 00:06:52.052: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-6635" for this suite. •{"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":330,"completed":73,"skipped":1253,"failed":3,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]"]} SS ------------------------------ [sig-apps] CronJob should schedule multiple jobs concurrently [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 00:06:52.084: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename cronjob STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:63 [It] should schedule multiple jobs concurrently [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a cronjob Mar 22 00:06:52.205: FAIL: Failed to create CronJob in namespace cronjob-8407 Unexpected error: <*errors.StatusError | 0xc001f06960>: { ErrStatus: { TypeMeta: {Kind: "", APIVersion: ""}, ListMeta: { SelfLink: "", ResourceVersion: "", Continue: "", RemainingItemCount: nil, }, Status: "Failure", Message: "the server could not find the requested resource", Reason: "NotFound", Details: {Name: "", Group: "", Kind: "", UID: "", Causes: nil, RetryAfterSeconds: 0}, Code: 404, }, } the server could not find the requested resource occurred Full Stack Trace k8s.io/kubernetes/test/e2e/apps.glob..func1.2() /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:77 +0x1f1 k8s.io/kubernetes/test/e2e.RunE2ETests(0xc002c6a180) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c k8s.io/kubernetes/test/e2e.TestE2E(0xc002c6a180) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b testing.tRunner(0xc002c6a180, 0x6d60740) /usr/local/go/src/testing/testing.go:1194 +0xef created by testing.(*T).Run 
/usr/local/go/src/testing/testing.go:1239 +0x2b3 [AfterEach] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 STEP: Collecting events from namespace "cronjob-8407". STEP: Found 0 events. Mar 22 00:06:52.212: INFO: POD NODE PHASE GRACE CONDITIONS Mar 22 00:06:52.212: INFO: Mar 22 00:06:52.216: INFO: Logging node info for node latest-control-plane Mar 22 00:06:52.219: INFO: Node Info: &Node{ObjectMeta:{latest-control-plane 490b9532-4cb6-4803-8805-500c50bef538 6982636 0 2021-02-19 10:11:38 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-control-plane kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master:] map[kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2021-02-19 10:11:38 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {kubeadm Update v1 2021-02-19 10:11:39 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{}}}}} {kube-controller-manager Update v1 2021-02-19 10:12:16 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:kind://docker/latest/latest-control-plane,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 
0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-03-22 00:04:33 +0000 UTC,LastTransitionTime:2021-02-19 10:11:38 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-03-22 00:04:33 +0000 UTC,LastTransitionTime:2021-02-19 10:11:38 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-03-22 00:04:33 +0000 UTC,LastTransitionTime:2021-02-19 10:11:38 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-03-22 00:04:33 +0000 UTC,LastTransitionTime:2021-02-19 10:12:15 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.14,},NodeAddress{Type:Hostname,Address:latest-control-plane,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:2277e49732264d9b915753a27b5b08cc,SystemUUID:3fcd47a6-9190-448f-a26a-9823c0424f23,BootID:b267d78b-f69b-4338-80e8-3f4944338e5d,KernelVersion:4.15.0-118-generic,OSImage:Ubuntu 20.04 LTS,ContainerRuntimeVersion:containerd://1.3.3-14-g449e9269,KubeletVersion:v1.21.0-alpha.0,KubeProxyVersion:v1.21.0-alpha.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/etcd:3.4.13-0],SizeBytes:254659261,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.21.0-alpha.0],SizeBytes:136866158,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.21.0-alpha.0],SizeBytes:95511852,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.21.0-alpha.0],SizeBytes:88147262,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.21.0-alpha.0],SizeBytes:66088748,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:a3f7549ea04c419276e8b84e90a515bbce5bc8a057be2ed974ec45492eca346e k8s.gcr.io/e2e-test-images/agnhost:2.26],SizeBytes:49216572,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:4f3eec1e602e1e0b0e5858c8f3d399d0ef536799ec1f7666e025f211001cb706 k8s.gcr.io/e2e-test-images/agnhost:2.28],SizeBytes:49210832,},ContainerImage{Names:[k8s.gcr.io/coredns:1.7.0],SizeBytes:45355487,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 docker.io/aquasec/kube-bench:0.3.1],SizeBytes:8042926,},ContainerImage{Names:[k8s.gcr.io/pause:3.2],SizeBytes:685724,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:301268,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 22 00:06:52.220: INFO: Logging 
kubelet events for node latest-control-plane Mar 22 00:06:52.228: INFO: Logging pods the kubelet thinks are on node latest-control-plane Mar 22 00:06:52.240: INFO: etcd-latest-control-plane started at 2021-02-19 10:11:46 +0000 UTC (0+1 container statuses recorded) Mar 22 00:06:52.240: INFO: Container etcd ready: true, restart count 0 Mar 22 00:06:52.240: INFO: kube-proxy-6jdsd started at 2021-02-19 10:11:54 +0000 UTC (0+1 container statuses recorded) Mar 22 00:06:52.240: INFO: Container kube-proxy ready: true, restart count 0 Mar 22 00:06:52.240: INFO: coredns-74ff55c5b-tqd5x started at 2021-03-22 00:01:48 +0000 UTC (0+1 container statuses recorded) Mar 22 00:06:52.240: INFO: Container coredns ready: true, restart count 0 Mar 22 00:06:52.240: INFO: local-path-provisioner-8b46957d4-54gls started at 2021-02-19 10:12:21 +0000 UTC (0+1 container statuses recorded) Mar 22 00:06:52.240: INFO: Container local-path-provisioner ready: true, restart count 0 Mar 22 00:06:52.240: INFO: kube-controller-manager-latest-control-plane started at 2021-02-19 10:11:46 +0000 UTC (0+1 container statuses recorded) Mar 22 00:06:52.240: INFO: Container kube-controller-manager ready: true, restart count 0 Mar 22 00:06:52.240: INFO: kube-scheduler-latest-control-plane started at 2021-02-19 10:11:46 +0000 UTC (0+1 container statuses recorded) Mar 22 00:06:52.240: INFO: Container kube-scheduler ready: true, restart count 0 Mar 22 00:06:52.240: INFO: kube-apiserver-latest-control-plane started at 2021-02-19 10:11:46 +0000 UTC (0+1 container statuses recorded) Mar 22 00:06:52.240: INFO: Container kube-apiserver ready: true, restart count 0 Mar 22 00:06:52.240: INFO: kindnet-94zqp started at 2021-02-19 10:11:54 +0000 UTC (0+1 container statuses recorded) Mar 22 00:06:52.240: INFO: Container kindnet-cni ready: true, restart count 0 Mar 22 00:06:52.240: INFO: coredns-74ff55c5b-9rxsk started at 2021-03-22 00:01:47 +0000 UTC (0+1 container statuses recorded) Mar 22 00:06:52.240: INFO: Container coredns ready: true, restart count 0 W0322 00:06:52.247439 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. 
Mar 22 00:06:52.363: INFO: Latency metrics for node latest-control-plane Mar 22 00:06:52.363: INFO: Logging node info for node latest-worker Mar 22 00:06:52.368: INFO: Node Info: &Node{ObjectMeta:{latest-worker 52cd6d4b-d53f-435d-801a-04c2822dec44 6985124 0 2021-02-19 10:12:05 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux io.kubernetes.storage.mock/node:some-mock-node kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-worker kubernetes.io/os:linux] map[csi.volume.kubernetes.io/nodeid:{"csi-mock-csi-mock-volumes-11":"csi-mock-csi-mock-volumes-11","csi-mock-csi-mock-volumes-1170":"csi-mock-csi-mock-volumes-1170","csi-mock-csi-mock-volumes-1204":"csi-mock-csi-mock-volumes-1204","csi-mock-csi-mock-volumes-1243":"csi-mock-csi-mock-volumes-1243","csi-mock-csi-mock-volumes-135":"csi-mock-csi-mock-volumes-135","csi-mock-csi-mock-volumes-1517":"csi-mock-csi-mock-volumes-1517","csi-mock-csi-mock-volumes-1561":"csi-mock-csi-mock-volumes-1561","csi-mock-csi-mock-volumes-1595":"csi-mock-csi-mock-volumes-1595","csi-mock-csi-mock-volumes-1714":"csi-mock-csi-mock-volumes-1714","csi-mock-csi-mock-volumes-1732":"csi-mock-csi-mock-volumes-1732","csi-mock-csi-mock-volumes-1876":"csi-mock-csi-mock-volumes-1876","csi-mock-csi-mock-volumes-1945":"csi-mock-csi-mock-volumes-1945","csi-mock-csi-mock-volumes-2041":"csi-mock-csi-mock-volumes-2041","csi-mock-csi-mock-volumes-2057":"csi-mock-csi-mock-volumes-2057","csi-mock-csi-mock-volumes-2208":"csi-mock-csi-mock-volumes-2208","csi-mock-csi-mock-volumes-2302":"csi-mock-csi-mock-volumes-2302","csi-mock-csi-mock-volumes-2459":"csi-mock-csi-mock-volumes-2459","csi-mock-csi-mock-volumes-2511":"csi-mock-csi-mock-volumes-2511","csi-mock-csi-mock-volumes-2706":"csi-mock-csi-mock-volumes-2706","csi-mock-csi-mock-volumes-2710":"csi-mock-csi-mock-volumes-2710","csi-mock-csi-mock-volumes-273":"csi-mock-csi-mock-volumes-273","csi-mock-csi-mock-volumes-2782":"csi-mock-csi-mock-volumes-2782","csi-mock-csi-mock-volumes-2797":"csi-mock-csi-mock-volumes-2797","csi-mock-csi-mock-volumes-2805":"csi-mock-csi-mock-volumes-2805","csi-mock-csi-mock-volumes-2843":"csi-mock-csi-mock-volumes-2843","csi-mock-csi-mock-volumes-3075":"csi-mock-csi-mock-volumes-3075","csi-mock-csi-mock-volumes-3107":"csi-mock-csi-mock-volumes-3107","csi-mock-csi-mock-volumes-3115":"csi-mock-csi-mock-volumes-3115","csi-mock-csi-mock-volumes-3164":"csi-mock-csi-mock-volumes-3164","csi-mock-csi-mock-volumes-3202":"csi-mock-csi-mock-volumes-3202","csi-mock-csi-mock-volumes-3235":"csi-mock-csi-mock-volumes-3235","csi-mock-csi-mock-volumes-3298":"csi-mock-csi-mock-volumes-3298","csi-mock-csi-mock-volumes-3313":"csi-mock-csi-mock-volumes-3313","csi-mock-csi-mock-volumes-3364":"csi-mock-csi-mock-volumes-3364","csi-mock-csi-mock-volumes-3488":"csi-mock-csi-mock-volumes-3488","csi-mock-csi-mock-volumes-3490":"csi-mock-csi-mock-volumes-3490","csi-mock-csi-mock-volumes-3546":"csi-mock-csi-mock-volumes-3546","csi-mock-csi-mock-volumes-3568":"csi-mock-csi-mock-volumes-3568","csi-mock-csi-mock-volumes-3595":"csi-mock-csi-mock-volumes-3595","csi-mock-csi-mock-volumes-3615":"csi-mock-csi-mock-volumes-3615","csi-mock-csi-mock-volumes-3622":"csi-mock-csi-mock-volumes-3622","csi-mock-csi-mock-volumes-3660":"csi-mock-csi-mock-volumes-3660","csi-mock-csi-mock-volumes-3738":"csi-mock-csi-mock-volumes-3738","csi-mock-csi-mock-volumes-380":"csi-mock-csi-mock-volumes-380","csi-mock-csi-mock-volumes-3905":"csi-mock-csi-mock-volumes-3905","csi-mock-csi-mock-volumes-3983":"csi-mock-csi-mock-volumes-3983","csi-mock
-csi-mock-volumes-4658":"csi-mock-csi-mock-volumes-4658","csi-mock-csi-mock-volumes-4689":"csi-mock-csi-mock-volumes-4689","csi-mock-csi-mock-volumes-4839":"csi-mock-csi-mock-volumes-4839","csi-mock-csi-mock-volumes-4871":"csi-mock-csi-mock-volumes-4871","csi-mock-csi-mock-volumes-4885":"csi-mock-csi-mock-volumes-4885","csi-mock-csi-mock-volumes-4888":"csi-mock-csi-mock-volumes-4888","csi-mock-csi-mock-volumes-5028":"csi-mock-csi-mock-volumes-5028","csi-mock-csi-mock-volumes-5118":"csi-mock-csi-mock-volumes-5118","csi-mock-csi-mock-volumes-5120":"csi-mock-csi-mock-volumes-5120","csi-mock-csi-mock-volumes-5160":"csi-mock-csi-mock-volumes-5160","csi-mock-csi-mock-volumes-5164":"csi-mock-csi-mock-volumes-5164","csi-mock-csi-mock-volumes-5225":"csi-mock-csi-mock-volumes-5225","csi-mock-csi-mock-volumes-526":"csi-mock-csi-mock-volumes-526","csi-mock-csi-mock-volumes-5365":"csi-mock-csi-mock-volumes-5365","csi-mock-csi-mock-volumes-5399":"csi-mock-csi-mock-volumes-5399","csi-mock-csi-mock-volumes-5443":"csi-mock-csi-mock-volumes-5443","csi-mock-csi-mock-volumes-5520":"csi-mock-csi-mock-volumes-5520","csi-mock-csi-mock-volumes-5561":"csi-mock-csi-mock-volumes-5561","csi-mock-csi-mock-volumes-5608":"csi-mock-csi-mock-volumes-5608","csi-mock-csi-mock-volumes-5652":"csi-mock-csi-mock-volumes-5652","csi-mock-csi-mock-volumes-5672":"csi-mock-csi-mock-volumes-5672","csi-mock-csi-mock-volumes-569":"csi-mock-csi-mock-volumes-569","csi-mock-csi-mock-volumes-5759":"csi-mock-csi-mock-volumes-5759","csi-mock-csi-mock-volumes-5910":"csi-mock-csi-mock-volumes-5910","csi-mock-csi-mock-volumes-6046":"csi-mock-csi-mock-volumes-6046","csi-mock-csi-mock-volumes-6099":"csi-mock-csi-mock-volumes-6099","csi-mock-csi-mock-volumes-621":"csi-mock-csi-mock-volumes-621","csi-mock-csi-mock-volumes-6347":"csi-mock-csi-mock-volumes-6347","csi-mock-csi-mock-volumes-6447":"csi-mock-csi-mock-volumes-6447","csi-mock-csi-mock-volumes-6752":"csi-mock-csi-mock-volumes-6752","csi-mock-csi-mock-volumes-6763":"csi-mock-csi-mock-volumes-6763","csi-mock-csi-mock-volumes-7184":"csi-mock-csi-mock-volumes-7184","csi-mock-csi-mock-volumes-7244":"csi-mock-csi-mock-volumes-7244","csi-mock-csi-mock-volumes-7259":"csi-mock-csi-mock-volumes-7259","csi-mock-csi-mock-volumes-726":"csi-mock-csi-mock-volumes-726","csi-mock-csi-mock-volumes-7302":"csi-mock-csi-mock-volumes-7302","csi-mock-csi-mock-volumes-7346":"csi-mock-csi-mock-volumes-7346","csi-mock-csi-mock-volumes-7378":"csi-mock-csi-mock-volumes-7378","csi-mock-csi-mock-volumes-7385":"csi-mock-csi-mock-volumes-7385","csi-mock-csi-mock-volumes-746":"csi-mock-csi-mock-volumes-746","csi-mock-csi-mock-volumes-7574":"csi-mock-csi-mock-volumes-7574","csi-mock-csi-mock-volumes-7712":"csi-mock-csi-mock-volumes-7712","csi-mock-csi-mock-volumes-7749":"csi-mock-csi-mock-volumes-7749","csi-mock-csi-mock-volumes-7820":"csi-mock-csi-mock-volumes-7820","csi-mock-csi-mock-volumes-7946":"csi-mock-csi-mock-volumes-7946","csi-mock-csi-mock-volumes-8159":"csi-mock-csi-mock-volumes-8159","csi-mock-csi-mock-volumes-8382":"csi-mock-csi-mock-volumes-8382","csi-mock-csi-mock-volumes-8458":"csi-mock-csi-mock-volumes-8458","csi-mock-csi-mock-volumes-8569":"csi-mock-csi-mock-volumes-8569","csi-mock-csi-mock-volumes-8626":"csi-mock-csi-mock-volumes-8626","csi-mock-csi-mock-volumes-8627":"csi-mock-csi-mock-volumes-8627","csi-mock-csi-mock-volumes-8774":"csi-mock-csi-mock-volumes-8774","csi-mock-csi-mock-volumes-8777":"csi-mock-csi-mock-volumes-8777","csi-mock-csi-mock-volumes-8880":"csi-mock-csi-mock-volumes-8880","csi-mock-
csi-mock-volumes-923":"csi-mock-csi-mock-volumes-923","csi-mock-csi-mock-volumes-9279":"csi-mock-csi-mock-volumes-9279","csi-mock-csi-mock-volumes-9372":"csi-mock-csi-mock-volumes-9372","csi-mock-csi-mock-volumes-9687":"csi-mock-csi-mock-volumes-9687","csi-mock-csi-mock-volumes-969":"csi-mock-csi-mock-volumes-969","csi-mock-csi-mock-volumes-983":"csi-mock-csi-mock-volumes-983","csi-mock-csi-mock-volumes-9858":"csi-mock-csi-mock-volumes-9858","csi-mock-csi-mock-volumes-9983":"csi-mock-csi-mock-volumes-9983","csi-mock-csi-mock-volumes-9992":"csi-mock-csi-mock-volumes-9992","csi-mock-csi-mock-volumes-9995":"csi-mock-csi-mock-volumes-9995"} kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-02-19 10:12:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {kube-controller-manager Update v1 2021-03-21 23:45:22 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}}}}} {kubelet Update v1 2021-03-21 23:45:51 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:csi.volume.kubernetes.io/nodeid":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:io.kubernetes.storage.mock/node":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:kind://docker/latest/latest-worker,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 
DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-03-22 00:05:54 +0000 UTC,LastTransitionTime:2021-02-19 10:12:05 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-03-22 00:05:54 +0000 UTC,LastTransitionTime:2021-02-19 10:12:05 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-03-22 00:05:54 +0000 UTC,LastTransitionTime:2021-02-19 10:12:05 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-03-22 00:05:54 +0000 UTC,LastTransitionTime:2021-02-19 10:12:25 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.9,},NodeAddress{Type:Hostname,Address:latest-worker,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:9501a3ffa7bf40adaca8f4cb3d1dcc93,SystemUUID:6921bb21-67c9-42ea-b514-81b1daac5968,BootID:b267d78b-f69b-4338-80e8-3f4944338e5d,KernelVersion:4.15.0-118-generic,OSImage:Ubuntu 20.04 LTS,ContainerRuntimeVersion:containerd://1.3.3-14-g449e9269,KubeletVersion:v1.21.0-alpha.0,KubeProxyVersion:v1.21.0-alpha.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/ollivier/clearwater-cassandra@sha256:7460a5e29c91e16bd36534b5284b27832e1c6e0f7c22a4bb79eda79942d250e1 docker.io/ollivier/clearwater-cassandra:hunter],SizeBytes:386500834,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead-prov@sha256:ef3803b4336ea217f9a467dac7f5c4bc3d2cd0f3cbbcc6c101419506fc3a6fa4 docker.io/ollivier/clearwater-homestead-prov:hunter],SizeBytes:360721934,},ContainerImage{Names:[docker.io/ollivier/clearwater-ellis@sha256:abcaf4a9ec74aa746f3b3949f05ee359349fce2b0b3b2deedf67164719cff5dc docker.io/ollivier/clearwater-ellis:hunter],SizeBytes:351519591,},ContainerImage{Names:[docker.io/ollivier/clearwater-homer@sha256:f8738d756137990120f6aafca7e0bc57186ab591a4b51508ba9ddf30688f4de1 docker.io/ollivier/clearwater-homer:hunter],SizeBytes:344304298,},ContainerImage{Names:[docker.io/ollivier/clearwater-astaire@sha256:5bc730435563dc3e31f3a3b7bb58b5899c453929b624052622b3a9a1c2fd04d8 docker.io/ollivier/clearwater-astaire:hunter],SizeBytes:327310970,},ContainerImage{Names:[docker.io/ollivier/clearwater-bono@sha256:0da41ddbf72c158f8e617015450587634cdf6856696005601b2108df92a27254 docker.io/ollivier/clearwater-bono:hunter],SizeBytes:303708624,},ContainerImage{Names:[docker.io/ollivier/clearwater-sprout@sha256:fd91f90bdfbfac30288ca296962e36537f8fa311d34711904e027098227d9f49 docker.io/ollivier/clearwater-sprout:hunter],SizeBytes:298627136,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead@sha256:8467d6c9caefa6a402eba927ee31f8179b2dc3386ac4768dd04f4bd630a1b9e9 docker.io/ollivier/clearwater-homestead:hunter],SizeBytes:295167572,},ContainerImage{Names:[docker.io/ollivier/clearwater-ralf@sha256:7054a741dbd4ea45fa4efd8138c07e21d35fbed627971559de5df2d03883e94f 
docker.io/ollivier/clearwater-ralf:hunter],SizeBytes:287441316,},ContainerImage{Names:[docker.io/ollivier/clearwater-chronos@sha256:1d5f9b5385db632e3076463fec5a4db5d4a15ef21ac5a112cb42fb0551ffa36d docker.io/ollivier/clearwater-chronos:hunter],SizeBytes:285504787,},ContainerImage{Names:[k8s.gcr.io/etcd:3.4.13-0],SizeBytes:254659261,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.21.0-alpha.0],SizeBytes:136866158,},ContainerImage{Names:[docker.io/aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 docker.io/aquasec/kube-hunter:0.3.1],SizeBytes:124684106,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:112029652,},ContainerImage{Names:[docker.io/gluster/glusterdynamic-provisioner@sha256:90067cb05a7d217651e84576935934fd9dcff8e6bcdcbaa416bbf36fcd09dbd1 docker.io/gluster/glusterdynamic-provisioner:v1.0],SizeBytes:111200078,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner@sha256:8bc20b52ce066dd4ea3d9eaac40c04ea8a77f47c33789676580cf4c7c9ea3c3d k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner:v1.0],SizeBytes:111199402,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.21.0-alpha.0],SizeBytes:95511852,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.21.0-alpha.0],SizeBytes:88147262,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:85425365,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.21.0-alpha.0],SizeBytes:66088748,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:49230179,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:a3f7549ea04c419276e8b84e90a515bbce5bc8a057be2ed974ec45492eca346e k8s.gcr.io/e2e-test-images/agnhost:2.26],SizeBytes:49216572,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:4f3eec1e602e1e0b0e5858c8f3d399d0ef536799ec1f7666e025f211001cb706 k8s.gcr.io/e2e-test-images/agnhost:2.28],SizeBytes:49210832,},ContainerImage{Names:[docker.io/bitnami/kubectl@sha256:471a2f06834db208d0e4bef5856550f911357c41b72d0baefa591b3714839067 docker.io/bitnami/kubectl:latest],SizeBytes:48898314,},ContainerImage{Names:[k8s.gcr.io/coredns:1.7.0],SizeBytes:45355487,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:41902332,},ContainerImage{Names:[docker.io/library/httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a docker.io/library/httpd:2.4.39-alpine],SizeBytes:41901429,},ContainerImage{Names:[docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 docker.io/library/httpd:2.4.38-alpine],SizeBytes:40765017,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 
k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:40765006,},ContainerImage{Names:[docker.io/ollivier/clearwater-live-test@sha256:1ec1c46f531b5466e62e2a3f1e65e97336e8ea0ca50f11a4305775d6aeac9a58 docker.io/ollivier/clearwater-live-test:hunter],SizeBytes:39060692,},ContainerImage{Names:[k8s.gcr.io/build-image/debian-iptables@sha256:abe8cef9e116f2d5ec1175c386e33841ff3386779138b425af876384b0fd7ccb k8s.gcr.io/build-image/debian-iptables:buster-v1.5.0],SizeBytes:38088315,},ContainerImage{Names:[k8s.gcr.io/build-image/debian-iptables@sha256:4c9410a4ee555dcb0e8b7bd6fc77c65ac400f7c5bd4555df8187630efaea6ea4 k8s.gcr.io/build-image/debian-iptables:buster-v1.3.0],SizeBytes:37934917,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:ff02aacd9766d597883fabafc7ad604c719a57611db1bcc1564c69a45b000a55 gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.17],SizeBytes:25311280,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:24757245,},ContainerImage{Names:[docker.io/pingcap/chaos-daemon@sha256:2138de599ca94104101fcd8f3bd04401f8c88f2bc721d0f77d55a81b85f4d92f docker.io/pingcap/chaos-daemon:v0.8.0],SizeBytes:21885344,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:20c828075d1e36f679d6a91e905b0927141eef5e15be0c9a1ca4a6a0ed9313d2 k8s.gcr.io/sig-storage/csi-provisioner:v2.1.0],SizeBytes:21205045,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:78e3393f5fd5ff6c1e5dada2478cfa456fb7164929e573cf9a87bf6532730679 k8s.gcr.io/sig-storage/csi-provisioner:v1.6.0],SizeBytes:19408504,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:2ffa647e8107cfd39e5f464e738dce014c9f5e51b108da36c3ab621048d0bbab k8s.gcr.io/sig-storage/csi-attacher:v2.2.0],SizeBytes:18451536,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:6c6a0332693a7c456378f6abd2bb40611826c1e1a733cadbdae2daab3125b71c k8s.gcr.io/sig-storage/csi-resizer:v0.5.0],SizeBytes:18412631,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:4051e85640c22f8e00c34dbd273576fc9e1e2829992656588062be9c0f69b04b k8s.gcr.io/e2e-test-images/nonroot:1.1],SizeBytes:17748448,},ContainerImage{Names:[docker.io/pingcap/chaos-mesh@sha256:b90c04665e0275602e52df37d4b6fdd4621df7d4e4c823fc63356559f67dea72 docker.io/pingcap/chaos-mesh:v0.8.0],SizeBytes:13863887,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e docker.io/coredns/coredns:latest docker.io/coredns/coredns:1.8.0],SizeBytes:12945155,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:4a6e0769130686518325b21b0c1d0688b54e7c79244d48e1b15634e98e40c6ef docker.io/coredns/coredns:1.7.1],SizeBytes:12935701,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 
docker.io/aquasec/kube-bench:0.3.1],SizeBytes:8042926,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:9622c6a6dac7499a055a382930f4de82905a3c5735c0753f7094115c9c871309 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v1.3.0],SizeBytes:7717137,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 22 00:06:52.369: INFO: Logging kubelet events for node latest-worker Mar 22 00:06:52.378: INFO: Logging pods the kubelet thinks are on node latest-worker Mar 22 00:06:52.385: INFO: kube-proxy-5wvjm started at 2021-02-19 10:12:05 +0000 UTC (0+1 container statuses recorded) Mar 22 00:06:52.385: INFO: Container kube-proxy ready: true, restart count 0 Mar 22 00:06:52.385: INFO: test-container-pod started at 2021-03-22 00:06:18 +0000 UTC (0+1 container statuses recorded) Mar 22 00:06:52.385: INFO: Container webserver ready: true, restart count 0 Mar 22 00:06:52.385: INFO: kindnet-l4mzm started at 2021-03-22 00:02:51 +0000 UTC (0+1 container statuses recorded) Mar 22 00:06:52.385: INFO: Container kindnet-cni ready: true, restart count 0 Mar 22 00:06:52.385: INFO: chaos-daemon-vb9xf started at 2021-03-22 00:02:51 +0000 UTC (0+1 container statuses recorded) Mar 22 00:06:52.385: INFO: Container chaos-daemon ready: true, restart count 0 Mar 22 00:06:52.385: INFO: chaos-controller-manager-69c479c674-rdmrr started at 2021-03-22 00:02:52 +0000 UTC (0+1 container statuses recorded) Mar 22 00:06:52.385: INFO: Container chaos-mesh ready: true, restart count 0 W0322 00:06:52.391140 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Mar 22 00:06:52.603: INFO: Latency metrics for node latest-worker Mar 22 00:06:52.603: INFO: Logging node info for node latest-worker2 Mar 22 00:06:52.607: INFO: Node Info: &Node{ObjectMeta:{latest-worker2 7d2a1377-0c6f-45fb-899e-6c307ecb1803 6986301 0 2021-02-19 10:12:05 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux io.kubernetes.storage.mock/node:some-mock-node kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-worker2 kubernetes.io/os:linux] 
map[csi.volume.kubernetes.io/nodeid:{"csi-mock-csi-mock-volumes-1005":"csi-mock-csi-mock-volumes-1005","csi-mock-csi-mock-volumes-1125":"csi-mock-csi-mock-volumes-1125","csi-mock-csi-mock-volumes-1158":"csi-mock-csi-mock-volumes-1158","csi-mock-csi-mock-volumes-1167":"csi-mock-csi-mock-volumes-1167","csi-mock-csi-mock-volumes-117":"csi-mock-csi-mock-volumes-117","csi-mock-csi-mock-volumes-123":"csi-mock-csi-mock-volumes-123","csi-mock-csi-mock-volumes-1248":"csi-mock-csi-mock-volumes-1248","csi-mock-csi-mock-volumes-132":"csi-mock-csi-mock-volumes-132","csi-mock-csi-mock-volumes-1462":"csi-mock-csi-mock-volumes-1462","csi-mock-csi-mock-volumes-1508":"csi-mock-csi-mock-volumes-1508","csi-mock-csi-mock-volumes-1620":"csi-mock-csi-mock-volumes-1620","csi-mock-csi-mock-volumes-1694":"csi-mock-csi-mock-volumes-1694","csi-mock-csi-mock-volumes-1823":"csi-mock-csi-mock-volumes-1823","csi-mock-csi-mock-volumes-1829":"csi-mock-csi-mock-volumes-1829","csi-mock-csi-mock-volumes-1852":"csi-mock-csi-mock-volumes-1852","csi-mock-csi-mock-volumes-1856":"csi-mock-csi-mock-volumes-1856","csi-mock-csi-mock-volumes-194":"csi-mock-csi-mock-volumes-194","csi-mock-csi-mock-volumes-1943":"csi-mock-csi-mock-volumes-1943","csi-mock-csi-mock-volumes-2103":"csi-mock-csi-mock-volumes-2103","csi-mock-csi-mock-volumes-2109":"csi-mock-csi-mock-volumes-2109","csi-mock-csi-mock-volumes-2134":"csi-mock-csi-mock-volumes-2134","csi-mock-csi-mock-volumes-2212":"csi-mock-csi-mock-volumes-2212","csi-mock-csi-mock-volumes-2302":"csi-mock-csi-mock-volumes-2302","csi-mock-csi-mock-volumes-2308":"csi-mock-csi-mock-volumes-2308","csi-mock-csi-mock-volumes-2407":"csi-mock-csi-mock-volumes-2407","csi-mock-csi-mock-volumes-2458":"csi-mock-csi-mock-volumes-2458","csi-mock-csi-mock-volumes-2474":"csi-mock-csi-mock-volumes-2474","csi-mock-csi-mock-volumes-254":"csi-mock-csi-mock-volumes-254","csi-mock-csi-mock-volumes-2575":"csi-mock-csi-mock-volumes-2575","csi-mock-csi-mock-volumes-2622":"csi-mock-csi-mock-volumes-2622","csi-mock-csi-mock-volumes-2651":"csi-mock-csi-mock-volumes-2651","csi-mock-csi-mock-volumes-2668":"csi-mock-csi-mock-volumes-2668","csi-mock-csi-mock-volumes-2781":"csi-mock-csi-mock-volumes-2781","csi-mock-csi-mock-volumes-2791":"csi-mock-csi-mock-volumes-2791","csi-mock-csi-mock-volumes-2823":"csi-mock-csi-mock-volumes-2823","csi-mock-csi-mock-volumes-2847":"csi-mock-csi-mock-volumes-2847","csi-mock-csi-mock-volumes-295":"csi-mock-csi-mock-volumes-295","csi-mock-csi-mock-volumes-3129":"csi-mock-csi-mock-volumes-3129","csi-mock-csi-mock-volumes-3190":"csi-mock-csi-mock-volumes-3190","csi-mock-csi-mock-volumes-3428":"csi-mock-csi-mock-volumes-3428","csi-mock-csi-mock-volumes-3490":"csi-mock-csi-mock-volumes-3490","csi-mock-csi-mock-volumes-3491":"csi-mock-csi-mock-volumes-3491","csi-mock-csi-mock-volumes-3532":"csi-mock-csi-mock-volumes-3532","csi-mock-csi-mock-volumes-3609":"csi-mock-csi-mock-volumes-3609","csi-mock-csi-mock-volumes-368":"csi-mock-csi-mock-volumes-368","csi-mock-csi-mock-volumes-3698":"csi-mock-csi-mock-volumes-3698","csi-mock-csi-mock-volumes-3714":"csi-mock-csi-mock-volumes-3714","csi-mock-csi-mock-volumes-3720":"csi-mock-csi-mock-volumes-3720","csi-mock-csi-mock-volumes-3722":"csi-mock-csi-mock-volumes-3722","csi-mock-csi-mock-volumes-3723":"csi-mock-csi-mock-volumes-3723","csi-mock-csi-mock-volumes-3783":"csi-mock-csi-mock-volumes-3783","csi-mock-csi-mock-volumes-3887":"csi-mock-csi-mock-volumes-3887","csi-mock-csi-mock-volumes-3973":"csi-mock-csi-mock-volumes-3973","csi-mock-csi-mock-volumes-4074":"
csi-mock-csi-mock-volumes-4074","csi-mock-csi-mock-volumes-4164":"csi-mock-csi-mock-volumes-4164","csi-mock-csi-mock-volumes-4181":"csi-mock-csi-mock-volumes-4181","csi-mock-csi-mock-volumes-4442":"csi-mock-csi-mock-volumes-4442","csi-mock-csi-mock-volumes-4483":"csi-mock-csi-mock-volumes-4483","csi-mock-csi-mock-volumes-4549":"csi-mock-csi-mock-volumes-4549","csi-mock-csi-mock-volumes-4707":"csi-mock-csi-mock-volumes-4707","csi-mock-csi-mock-volumes-4906":"csi-mock-csi-mock-volumes-4906","csi-mock-csi-mock-volumes-4977":"csi-mock-csi-mock-volumes-4977","csi-mock-csi-mock-volumes-5116":"csi-mock-csi-mock-volumes-5116","csi-mock-csi-mock-volumes-5166":"csi-mock-csi-mock-volumes-5166","csi-mock-csi-mock-volumes-5233":"csi-mock-csi-mock-volumes-5233","csi-mock-csi-mock-volumes-5344":"csi-mock-csi-mock-volumes-5344","csi-mock-csi-mock-volumes-5406":"csi-mock-csi-mock-volumes-5406","csi-mock-csi-mock-volumes-5433":"csi-mock-csi-mock-volumes-5433","csi-mock-csi-mock-volumes-5483":"csi-mock-csi-mock-volumes-5483","csi-mock-csi-mock-volumes-5486":"csi-mock-csi-mock-volumes-5486","csi-mock-csi-mock-volumes-5520":"csi-mock-csi-mock-volumes-5520","csi-mock-csi-mock-volumes-5540":"csi-mock-csi-mock-volumes-5540","csi-mock-csi-mock-volumes-5592":"csi-mock-csi-mock-volumes-5592","csi-mock-csi-mock-volumes-5741":"csi-mock-csi-mock-volumes-5741","csi-mock-csi-mock-volumes-5753":"csi-mock-csi-mock-volumes-5753","csi-mock-csi-mock-volumes-5790":"csi-mock-csi-mock-volumes-5790","csi-mock-csi-mock-volumes-5820":"csi-mock-csi-mock-volumes-5820","csi-mock-csi-mock-volumes-5830":"csi-mock-csi-mock-volumes-5830","csi-mock-csi-mock-volumes-5880":"csi-mock-csi-mock-volumes-5880","csi-mock-csi-mock-volumes-5886":"csi-mock-csi-mock-volumes-5886","csi-mock-csi-mock-volumes-5899":"csi-mock-csi-mock-volumes-5899","csi-mock-csi-mock-volumes-5925":"csi-mock-csi-mock-volumes-5925","csi-mock-csi-mock-volumes-5928":"csi-mock-csi-mock-volumes-5928","csi-mock-csi-mock-volumes-6098":"csi-mock-csi-mock-volumes-6098","csi-mock-csi-mock-volumes-6154":"csi-mock-csi-mock-volumes-6154","csi-mock-csi-mock-volumes-6193":"csi-mock-csi-mock-volumes-6193","csi-mock-csi-mock-volumes-6237":"csi-mock-csi-mock-volumes-6237","csi-mock-csi-mock-volumes-6393":"csi-mock-csi-mock-volumes-6393","csi-mock-csi-mock-volumes-6394":"csi-mock-csi-mock-volumes-6394","csi-mock-csi-mock-volumes-6468":"csi-mock-csi-mock-volumes-6468","csi-mock-csi-mock-volumes-6508":"csi-mock-csi-mock-volumes-6508","csi-mock-csi-mock-volumes-6516":"csi-mock-csi-mock-volumes-6516","csi-mock-csi-mock-volumes-6520":"csi-mock-csi-mock-volumes-6520","csi-mock-csi-mock-volumes-6574":"csi-mock-csi-mock-volumes-6574","csi-mock-csi-mock-volumes-6663":"csi-mock-csi-mock-volumes-6663","csi-mock-csi-mock-volumes-6715":"csi-mock-csi-mock-volumes-6715","csi-mock-csi-mock-volumes-6754":"csi-mock-csi-mock-volumes-6754","csi-mock-csi-mock-volumes-6804":"csi-mock-csi-mock-volumes-6804","csi-mock-csi-mock-volumes-6918":"csi-mock-csi-mock-volumes-6918","csi-mock-csi-mock-volumes-6925":"csi-mock-csi-mock-volumes-6925","csi-mock-csi-mock-volumes-7092":"csi-mock-csi-mock-volumes-7092","csi-mock-csi-mock-volumes-7139":"csi-mock-csi-mock-volumes-7139","csi-mock-csi-mock-volumes-7270":"csi-mock-csi-mock-volumes-7270","csi-mock-csi-mock-volumes-7273":"csi-mock-csi-mock-volumes-7273","csi-mock-csi-mock-volumes-7442":"csi-mock-csi-mock-volumes-7442","csi-mock-csi-mock-volumes-7448":"csi-mock-csi-mock-volumes-7448","csi-mock-csi-mock-volumes-7543":"csi-mock-csi-mock-volumes-7543","csi-mock-csi-mock-volume
s-7597":"csi-mock-csi-mock-volumes-7597","csi-mock-csi-mock-volumes-7608":"csi-mock-csi-mock-volumes-7608","csi-mock-csi-mock-volumes-7642":"csi-mock-csi-mock-volumes-7642","csi-mock-csi-mock-volumes-7659":"csi-mock-csi-mock-volumes-7659","csi-mock-csi-mock-volumes-7725":"csi-mock-csi-mock-volumes-7725","csi-mock-csi-mock-volumes-7760":"csi-mock-csi-mock-volumes-7760","csi-mock-csi-mock-volumes-778":"csi-mock-csi-mock-volumes-778","csi-mock-csi-mock-volumes-7811":"csi-mock-csi-mock-volumes-7811","csi-mock-csi-mock-volumes-7819":"csi-mock-csi-mock-volumes-7819","csi-mock-csi-mock-volumes-791":"csi-mock-csi-mock-volumes-791","csi-mock-csi-mock-volumes-7929":"csi-mock-csi-mock-volumes-7929","csi-mock-csi-mock-volumes-7930":"csi-mock-csi-mock-volumes-7930","csi-mock-csi-mock-volumes-7933":"csi-mock-csi-mock-volumes-7933","csi-mock-csi-mock-volumes-7950":"csi-mock-csi-mock-volumes-7950","csi-mock-csi-mock-volumes-8005":"csi-mock-csi-mock-volumes-8005","csi-mock-csi-mock-volumes-8070":"csi-mock-csi-mock-volumes-8070","csi-mock-csi-mock-volumes-8123":"csi-mock-csi-mock-volumes-8123","csi-mock-csi-mock-volumes-8132":"csi-mock-csi-mock-volumes-8132","csi-mock-csi-mock-volumes-8134":"csi-mock-csi-mock-volumes-8134","csi-mock-csi-mock-volumes-8381":"csi-mock-csi-mock-volumes-8381","csi-mock-csi-mock-volumes-8391":"csi-mock-csi-mock-volumes-8391","csi-mock-csi-mock-volumes-8409":"csi-mock-csi-mock-volumes-8409","csi-mock-csi-mock-volumes-8420":"csi-mock-csi-mock-volumes-8420","csi-mock-csi-mock-volumes-8467":"csi-mock-csi-mock-volumes-8467","csi-mock-csi-mock-volumes-8581":"csi-mock-csi-mock-volumes-8581","csi-mock-csi-mock-volumes-8619":"csi-mock-csi-mock-volumes-8619","csi-mock-csi-mock-volumes-8708":"csi-mock-csi-mock-volumes-8708","csi-mock-csi-mock-volumes-8766":"csi-mock-csi-mock-volumes-8766","csi-mock-csi-mock-volumes-8789":"csi-mock-csi-mock-volumes-8789","csi-mock-csi-mock-volumes-8800":"csi-mock-csi-mock-volumes-8800","csi-mock-csi-mock-volumes-8819":"csi-mock-csi-mock-volumes-8819","csi-mock-csi-mock-volumes-8830":"csi-mock-csi-mock-volumes-8830","csi-mock-csi-mock-volumes-9034":"csi-mock-csi-mock-volumes-9034","csi-mock-csi-mock-volumes-9051":"csi-mock-csi-mock-volumes-9051","csi-mock-csi-mock-volumes-9052":"csi-mock-csi-mock-volumes-9052","csi-mock-csi-mock-volumes-9211":"csi-mock-csi-mock-volumes-9211","csi-mock-csi-mock-volumes-9234":"csi-mock-csi-mock-volumes-9234","csi-mock-csi-mock-volumes-9238":"csi-mock-csi-mock-volumes-9238","csi-mock-csi-mock-volumes-9316":"csi-mock-csi-mock-volumes-9316","csi-mock-csi-mock-volumes-9344":"csi-mock-csi-mock-volumes-9344","csi-mock-csi-mock-volumes-9386":"csi-mock-csi-mock-volumes-9386","csi-mock-csi-mock-volumes-941":"csi-mock-csi-mock-volumes-941","csi-mock-csi-mock-volumes-9416":"csi-mock-csi-mock-volumes-9416","csi-mock-csi-mock-volumes-9426":"csi-mock-csi-mock-volumes-9426","csi-mock-csi-mock-volumes-9429":"csi-mock-csi-mock-volumes-9429","csi-mock-csi-mock-volumes-9438":"csi-mock-csi-mock-volumes-9438","csi-mock-csi-mock-volumes-9439":"csi-mock-csi-mock-volumes-9439","csi-mock-csi-mock-volumes-9665":"csi-mock-csi-mock-volumes-9665","csi-mock-csi-mock-volumes-9772":"csi-mock-csi-mock-volumes-9772","csi-mock-csi-mock-volumes-9793":"csi-mock-csi-mock-volumes-9793","csi-mock-csi-mock-volumes-9921":"csi-mock-csi-mock-volumes-9921","csi-mock-csi-mock-volumes-9953":"csi-mock-csi-mock-volumes-9953"} kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 
volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-02-19 10:12:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {kube-controller-manager Update v1 2021-03-21 23:58:13 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}} {kubelet Update v1 2021-03-21 23:58:43 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:csi.volume.kubernetes.io/nodeid":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:io.kubernetes.storage.mock/node":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:kind://docker/latest/latest-worker2,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-03-22 00:03:43 +0000 UTC,LastTransitionTime:2021-02-19 10:12:05 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-03-22 00:03:43 +0000 UTC,LastTransitionTime:2021-02-19 10:12:05 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-03-22 00:03:43 +0000 UTC,LastTransitionTime:2021-02-19 10:12:05 +0000 
UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-03-22 00:03:43 +0000 UTC,LastTransitionTime:2021-02-19 10:12:35 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.13,},NodeAddress{Type:Hostname,Address:latest-worker2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:c99819c175ab4bf18f91789643e4cec7,SystemUUID:67d3f1bb-f61c-4599-98ac-5879bee4ddde,BootID:b267d78b-f69b-4338-80e8-3f4944338e5d,KernelVersion:4.15.0-118-generic,OSImage:Ubuntu 20.04 LTS,ContainerRuntimeVersion:containerd://1.3.3-14-g449e9269,KubeletVersion:v1.21.0-alpha.0,KubeProxyVersion:v1.21.0-alpha.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/ollivier/clearwater-cassandra@sha256:7460a5e29c91e16bd36534b5284b27832e1c6e0f7c22a4bb79eda79942d250e1 docker.io/ollivier/clearwater-cassandra:hunter],SizeBytes:386500834,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead-prov@sha256:ef3803b4336ea217f9a467dac7f5c4bc3d2cd0f3cbbcc6c101419506fc3a6fa4 docker.io/ollivier/clearwater-homestead-prov:hunter],SizeBytes:360721934,},ContainerImage{Names:[docker.io/ollivier/clearwater-ellis@sha256:abcaf4a9ec74aa746f3b3949f05ee359349fce2b0b3b2deedf67164719cff5dc docker.io/ollivier/clearwater-ellis:hunter],SizeBytes:351519591,},ContainerImage{Names:[docker.io/ollivier/clearwater-homer@sha256:f8738d756137990120f6aafca7e0bc57186ab591a4b51508ba9ddf30688f4de1 docker.io/ollivier/clearwater-homer:hunter],SizeBytes:344304298,},ContainerImage{Names:[docker.io/ollivier/clearwater-astaire@sha256:5bc730435563dc3e31f3a3b7bb58b5899c453929b624052622b3a9a1c2fd04d8 docker.io/ollivier/clearwater-astaire:hunter],SizeBytes:327310970,},ContainerImage{Names:[docker.io/ollivier/clearwater-bono@sha256:0da41ddbf72c158f8e617015450587634cdf6856696005601b2108df92a27254 docker.io/ollivier/clearwater-bono:hunter],SizeBytes:303708624,},ContainerImage{Names:[docker.io/ollivier/clearwater-sprout@sha256:fd91f90bdfbfac30288ca296962e36537f8fa311d34711904e027098227d9f49 docker.io/ollivier/clearwater-sprout:hunter],SizeBytes:298627136,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead@sha256:8467d6c9caefa6a402eba927ee31f8179b2dc3386ac4768dd04f4bd630a1b9e9 docker.io/ollivier/clearwater-homestead:hunter],SizeBytes:295167572,},ContainerImage{Names:[docker.io/ollivier/clearwater-ralf@sha256:7054a741dbd4ea45fa4efd8138c07e21d35fbed627971559de5df2d03883e94f docker.io/ollivier/clearwater-ralf:hunter],SizeBytes:287441316,},ContainerImage{Names:[docker.io/ollivier/clearwater-chronos@sha256:1d5f9b5385db632e3076463fec5a4db5d4a15ef21ac5a112cb42fb0551ffa36d docker.io/ollivier/clearwater-chronos:hunter],SizeBytes:285504787,},ContainerImage{Names:[k8s.gcr.io/etcd:3.4.13-0],SizeBytes:254659261,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.21.0-alpha.0],SizeBytes:136866158,},ContainerImage{Names:[docker.io/aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 docker.io/aquasec/kube-hunter:0.3.1],SizeBytes:124684106,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 
k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:112029652,},ContainerImage{Names:[docker.io/gluster/glusterdynamic-provisioner@sha256:90067cb05a7d217651e84576935934fd9dcff8e6bcdcbaa416bbf36fcd09dbd1 docker.io/gluster/glusterdynamic-provisioner:v1.0],SizeBytes:111200078,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner@sha256:8bc20b52ce066dd4ea3d9eaac40c04ea8a77f47c33789676580cf4c7c9ea3c3d k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner:v1.0],SizeBytes:111199402,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.21.0-alpha.0],SizeBytes:95511852,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.21.0-alpha.0],SizeBytes:88147262,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:85425365,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.21.0-alpha.0],SizeBytes:66088748,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:49230179,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:a3f7549ea04c419276e8b84e90a515bbce5bc8a057be2ed974ec45492eca346e k8s.gcr.io/e2e-test-images/agnhost:2.26],SizeBytes:49216572,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:4f3eec1e602e1e0b0e5858c8f3d399d0ef536799ec1f7666e025f211001cb706 k8s.gcr.io/e2e-test-images/agnhost:2.28],SizeBytes:49210832,},ContainerImage{Names:[k8s.gcr.io/coredns:1.7.0],SizeBytes:45355487,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:41902332,},ContainerImage{Names:[docker.io/library/httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a docker.io/library/httpd:2.4.39-alpine],SizeBytes:41901429,},ContainerImage{Names:[docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 docker.io/library/httpd:2.4.38-alpine],SizeBytes:40765017,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:40765006,},ContainerImage{Names:[docker.io/ollivier/clearwater-live-test@sha256:1ec1c46f531b5466e62e2a3f1e65e97336e8ea0ca50f11a4305775d6aeac9a58 docker.io/ollivier/clearwater-live-test:hunter],SizeBytes:39060692,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:ff02aacd9766d597883fabafc7ad604c719a57611db1bcc1564c69a45b000a55 gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.17],SizeBytes:25311280,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:24757245,},ContainerImage{Names:[docker.io/pingcap/chaos-daemon@sha256:2138de599ca94104101fcd8f3bd04401f8c88f2bc721d0f77d55a81b85f4d92f docker.io/pingcap/chaos-daemon:v0.8.0],SizeBytes:21885344,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:20c828075d1e36f679d6a91e905b0927141eef5e15be0c9a1ca4a6a0ed9313d2 
k8s.gcr.io/sig-storage/csi-provisioner:v2.1.0],SizeBytes:21205045,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:78e3393f5fd5ff6c1e5dada2478cfa456fb7164929e573cf9a87bf6532730679 k8s.gcr.io/sig-storage/csi-provisioner:v1.6.0],SizeBytes:19408504,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/regression-issue-74839@sha256:b4f1d8d61bdad84bd50442d161d5460e4019d53e989b64220fdbc62fc87d76bf k8s.gcr.io/e2e-test-images/regression-issue-74839:1.2],SizeBytes:18651485,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:2ffa647e8107cfd39e5f464e738dce014c9f5e51b108da36c3ab621048d0bbab k8s.gcr.io/sig-storage/csi-attacher:v2.2.0],SizeBytes:18451536,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:6c6a0332693a7c456378f6abd2bb40611826c1e1a733cadbdae2daab3125b71c k8s.gcr.io/sig-storage/csi-resizer:v0.5.0],SizeBytes:18412631,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:4051e85640c22f8e00c34dbd273576fc9e1e2829992656588062be9c0f69b04b k8s.gcr.io/e2e-test-images/nonroot:1.1],SizeBytes:17748448,},ContainerImage{Names:[docker.io/pingcap/chaos-mesh@sha256:b90c04665e0275602e52df37d4b6fdd4621df7d4e4c823fc63356559f67dea72 docker.io/pingcap/chaos-mesh:v0.8.0],SizeBytes:13863887,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e docker.io/coredns/coredns:1.8.0],SizeBytes:12945155,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:4a6e0769130686518325b21b0c1d0688b54e7c79244d48e1b15634e98e40c6ef docker.io/coredns/coredns:1.7.1],SizeBytes:12935701,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:642ff9910da6ea9a8624b0234eef52af9ca75ecbec474c5507cb096bdfbae4e5 docker.io/coredns/coredns:latest],SizeBytes:12893350,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 docker.io/aquasec/kube-bench:0.3.1],SizeBytes:8042926,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:9622c6a6dac7499a055a382930f4de82905a3c5735c0753f7094115c9c871309 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v1.3.0],SizeBytes:7717137,},ContainerImage{Names:[quay.io/coreos/etcd:v2.2.5],SizeBytes:7670543,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 22 00:06:52.608: INFO: Logging kubelet events for node latest-worker2 Mar 22 00:06:52.616: INFO: Logging pods the kubelet thinks is on node latest-worker2 Mar 22 00:06:52.626: INFO: back-off-cap started at 2021-03-22 00:02:52 +0000 UTC (0+1 container statuses recorded) Mar 22 00:06:52.626: INFO: Container back-off-cap ready: false, restart count 5 Mar 22 00:06:52.626: INFO: netserver-1 started at 2021-03-22 00:05:55 +0000 UTC (0+1 container statuses recorded) Mar 22 00:06:52.626: INFO: Container webserver ready: true, restart count 0 Mar 22 00:06:52.626: INFO: kube-proxy-7q92q started at 2021-02-19 10:12:05 +0000 UTC (0+1 container statuses recorded) Mar 22 00:06:52.626: INFO: Container kube-proxy ready: true, restart count 0 Mar 22 00:06:52.626: INFO: kindnet-7qb7q started at 2021-03-22 00:02:52 +0000 UTC (0+1 
container statuses recorded) Mar 22 00:06:52.626: INFO: Container kindnet-cni ready: true, restart count 0 Mar 22 00:06:52.626: INFO: csi-mockplugin-0 started at 2021-03-22 00:05:40 +0000 UTC (0+3 container statuses recorded) Mar 22 00:06:52.626: INFO: Container csi-provisioner ready: false, restart count 0 Mar 22 00:06:52.626: INFO: Container driver-registrar ready: false, restart count 0 Mar 22 00:06:52.626: INFO: Container mock ready: false, restart count 0 Mar 22 00:06:52.626: INFO: chaos-daemon-4zjcg started at 2021-03-22 00:02:52 +0000 UTC (0+1 container statuses recorded) Mar 22 00:06:52.626: INFO: Container chaos-daemon ready: true, restart count 0 W0322 00:06:52.633042 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Mar 22 00:06:52.859: INFO: Latency metrics for node latest-worker2 Mar 22 00:06:52.859: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "cronjob-8407" for this suite. • Failure [0.784 seconds] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should schedule multiple jobs concurrently [Conformance] [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Mar 22 00:06:52.205: Failed to create CronJob in namespace cronjob-8407 Unexpected error: <*errors.StatusError | 0xc001f06960>: { ErrStatus: { TypeMeta: {Kind: "", APIVersion: ""}, ListMeta: { SelfLink: "", ResourceVersion: "", Continue: "", RemainingItemCount: nil, }, Status: "Failure", Message: "the server could not find the requested resource", Reason: "NotFound", Details: {Name: "", Group: "", Kind: "", UID: "", Causes: nil, RetryAfterSeconds: 0}, Code: 404, }, } the server could not find the requested resource occurred /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:77 ------------------------------ {"msg":"FAILED [sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","total":330,"completed":73,"skipped":1255,"failed":4,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]"]} SSSS ------------------------------ [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 00:06:52.868: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should be able to change the type from NodePort to ExternalName [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a service nodeport-service with the 
type=NodePort in namespace services-2979 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-2979 STEP: creating replication controller externalsvc in namespace services-2979 I0322 00:06:53.221316 7 runners.go:190] Created replication controller with name: externalsvc, namespace: services-2979, replica count: 2 I0322 00:06:56.272688 7 runners.go:190] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0322 00:06:59.273889 7 runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the NodePort service to type=ExternalName Mar 22 00:06:59.363: INFO: Creating new exec pod Mar 22 00:07:03.453: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=services-2979 exec execpodkfbbk -- /bin/sh -x -c nslookup nodeport-service.services-2979.svc.cluster.local' Mar 22 00:07:07.142: INFO: stderr: "+ nslookup nodeport-service.services-2979.svc.cluster.local\n" Mar 22 00:07:07.142: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nnodeport-service.services-2979.svc.cluster.local\tcanonical name = externalsvc.services-2979.svc.cluster.local.\nName:\texternalsvc.services-2979.svc.cluster.local\nAddress: 10.96.90.70\n\n" STEP: deleting ReplicationController externalsvc in namespace services-2979, will wait for the garbage collector to delete the pods Mar 22 00:07:07.203: INFO: Deleting ReplicationController externalsvc took: 7.534978ms Mar 22 00:07:07.804: INFO: Terminating ReplicationController externalsvc pods took: 600.924142ms Mar 22 00:07:35.494: INFO: Cleaning up the NodePort to ExternalName test service [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 00:07:35.639: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-2979" for this suite. 
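The passing run above flips an existing NodePort Service to type ExternalName, then checks via the nslookup output that the old service name now resolves as a CNAME to externalsvc. A minimal client-go sketch of that type change (namespace and service names taken from the log; this approximates what the test does and is not the e2e framework's own code):

```go
package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	svcs := cs.CoreV1().Services("services-2979")
	svc, err := svcs.Get(context.TODO(), "nodeport-service", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}

	// An ExternalName Service is a pure DNS CNAME: it owns no cluster IP
	// and no (node) ports, so those fields are cleared before the update.
	svc.Spec.Type = corev1.ServiceTypeExternalName
	svc.Spec.ExternalName = "externalsvc.services-2979.svc.cluster.local"
	svc.Spec.ClusterIP = ""
	svc.Spec.ClusterIPs = nil
	svc.Spec.Ports = nil

	if _, err := svcs.Update(context.TODO(), svc, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
}
```

After such an update, the CNAME chain shown in the log (nodeport-service → externalsvc → 10.96.90.70) is exactly what cluster DNS is expected to serve.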
[AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • [SLOW TEST:42.861 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should be able to change the type from NodePort to ExternalName [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":330,"completed":74,"skipped":1259,"failed":4,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]"]} SSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 00:07:35.730: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241 [It] should support proxy with --port 0 [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: starting the proxy server Mar 22 00:07:35.870: INFO: Asynchronously running '/usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=kubectl-7474 proxy -p 0 --disable-filter' STEP: curling proxy /api/ output [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 00:07:35.944: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7474" for this suite.
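The proxy test passes -p 0 so the kernel assigns any free port, then curls /api/ through whatever port kubectl reports. A rough stand-alone equivalent in Go (assumes kubectl on PATH and a valid kubeconfig; the "Starting to serve on 127.0.0.1:PORT" banner format is an assumption about kubectl's output, so the regex is best-effort):

```go
package main

import (
	"bufio"
	"fmt"
	"io"
	"net/http"
	"os/exec"
	"regexp"
)

func main() {
	// -p 0 binds an ephemeral port; kubectl prints the address it chose.
	cmd := exec.Command("kubectl", "proxy", "-p", "0", "--disable-filter")
	stdout, err := cmd.StdoutPipe()
	if err != nil {
		panic(err)
	}
	if err := cmd.Start(); err != nil {
		panic(err)
	}
	defer cmd.Process.Kill()

	// Assumed first line: "Starting to serve on 127.0.0.1:37465"
	banner, err := bufio.NewReader(stdout).ReadString('\n')
	if err != nil {
		panic(err)
	}
	m := regexp.MustCompile(`127\.0\.0\.1:(\d+)`).FindStringSubmatch(banner)
	if m == nil {
		panic("could not find proxy port in: " + banner)
	}

	resp, err := http.Get(fmt.Sprintf("http://127.0.0.1:%s/api/", m[1]))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(string(body)) // expect the APIVersions JSON document
}
```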
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance]","total":330,"completed":75,"skipped":1276,"failed":4,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]"]} SSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 00:07:36.005: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test emptydir 0644 on tmpfs Mar 22 00:07:36.164: INFO: Waiting up to 5m0s for pod "pod-4b7b09b9-8b57-4132-a704-ed84cad10d5a" in namespace "emptydir-6530" to be "Succeeded or Failed" Mar 22 00:07:36.167: INFO: Pod "pod-4b7b09b9-8b57-4132-a704-ed84cad10d5a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.882446ms Mar 22 00:07:38.465: INFO: Pod "pod-4b7b09b9-8b57-4132-a704-ed84cad10d5a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.300667415s Mar 22 00:07:40.529: INFO: Pod "pod-4b7b09b9-8b57-4132-a704-ed84cad10d5a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.364729182s Mar 22 00:07:42.621: INFO: Pod "pod-4b7b09b9-8b57-4132-a704-ed84cad10d5a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.4559855s STEP: Saw pod success Mar 22 00:07:42.621: INFO: Pod "pod-4b7b09b9-8b57-4132-a704-ed84cad10d5a" satisfied condition "Succeeded or Failed" Mar 22 00:07:42.624: INFO: Trying to get logs from node latest-worker pod pod-4b7b09b9-8b57-4132-a704-ed84cad10d5a container test-container: STEP: delete the pod Mar 22 00:07:42.843: INFO: Waiting for pod pod-4b7b09b9-8b57-4132-a704-ed84cad10d5a to disappear Mar 22 00:07:42.881: INFO: Pod pod-4b7b09b9-8b57-4132-a704-ed84cad10d5a no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 00:07:42.881: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6530" for this suite. 
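The emptydir case runs a throwaway pod as a non-root user, mounts a memory-backed (tmpfs) emptyDir, verifies a 0644 file on it, and waits for the pod to exit 0. A sketch of an equivalent pod object; the busybox image and the shell one-liner are stand-ins for the agnhost mount-test container the suite actually uses:

```go
package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// nonRootTmpfsPod creates a 0644 file on a tmpfs-backed emptyDir while
// running as a non-root UID, then exits so the test can poll for Succeeded.
func nonRootTmpfsPod() *corev1.Pod {
	uid := int64(1001) // any non-root UID
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{GenerateName: "emptydir-0644-"},
		Spec: corev1.PodSpec{
			RestartPolicy:   corev1.RestartPolicyNever,
			SecurityContext: &corev1.PodSecurityContext{RunAsUser: &uid},
			Volumes: []corev1.Volume{{
				Name: "scratch",
				VolumeSource: corev1.VolumeSource{
					EmptyDir: &corev1.EmptyDirVolumeSource{
						Medium: corev1.StorageMediumMemory, // tmpfs
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "busybox:1.33", // placeholder image
				Command: []string{"sh", "-c",
					// create a 0644 file on the mount and print mode + owner uid
					"touch /mnt/f && chmod 0644 /mnt/f && stat -c '%a %u' /mnt/f"},
				VolumeMounts: []corev1.VolumeMount{{Name: "scratch", MountPath: "/mnt"}},
			}},
		},
	}
}
```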
• [SLOW TEST:6.932 seconds] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":330,"completed":76,"skipped":1288,"failed":4,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]"]} SSSSSSS ------------------------------ [sig-node] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 00:07:42.936: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:38 [BeforeEach] when scheduling a busybox command that always fails in a pod /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:82 [It] should be possible to delete [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [AfterEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 00:07:43.437: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-5943" for this suite. 
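The kubelet case above schedules a busybox pod whose command always fails, so the container sits in a crash loop, and then checks that deletion still works. The deletion itself is a single client-go call; a sketch (the pod name is a stand-in, the real test generates one):

```go
package sketch

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// deleteCrashLoopingPod shows that a pod stuck restarting a failing command
// is deleted exactly like a healthy one. A zero grace period skips the
// default 30s graceful shutdown, which a crash-looping container cannot
// honour anyway.
func deleteCrashLoopingPod(cs *kubernetes.Clientset) error {
	grace := int64(0)
	return cs.CoreV1().Pods("kubelet-test-5943").Delete(
		context.TODO(),
		"bin-false-pod", // stand-in name
		metav1.DeleteOptions{GracePeriodSeconds: &grace},
	)
}
```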
•{"msg":"PASSED [sig-node] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":330,"completed":77,"skipped":1295,"failed":4,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]"]} SSSS ------------------------------ [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 00:07:43.844: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] binary data should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name configmap-test-upd-e8160aac-42e2-4d39-82c8-19e888b3a9d4 STEP: Creating the pod STEP: Waiting for pod with text data STEP: Waiting for pod with binary data [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 00:07:54.114: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-809" for this suite. 
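The ConfigMap case exercises the binaryData field alongside plain data and checks that both appear as files in the mounted volume. A sketch of creating such a ConfigMap (names shortened from the log; the byte values are arbitrary):

```go
package sketch

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// createMixedConfigMap stores text under .data and raw bytes under
// .binaryData. A volume mount exposes both as files, and the binary entry
// round-trips byte-for-byte: base64 only exists at the API layer, not in
// the mounted file.
func createMixedConfigMap(cs *kubernetes.Clientset) error {
	cm := &corev1.ConfigMap{
		ObjectMeta: metav1.ObjectMeta{Name: "configmap-test-upd"},
		Data:       map[string]string{"data-1": "value-1"},
		BinaryData: map[string][]byte{"dump.bin": {0xde, 0xad, 0xbe, 0xef}},
	}
	_, err := cs.CoreV1().ConfigMaps("configmap-809").Create(
		context.TODO(), cm, metav1.CreateOptions{})
	return err
}
```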
• [SLOW TEST:10.277 seconds] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 binary data should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":330,"completed":78,"skipped":1299,"failed":4,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 00:07:54.122: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating secret with name s-test-opt-del-991a4a38-cd73-498c-a05c-7286685bbf76 STEP: Creating secret with name s-test-opt-upd-6f1f2a3a-fd90-45e7-a0f3-f8359da3a860 STEP: Creating the pod Mar 22 00:07:54.903: INFO: The status of Pod pod-projected-secrets-1aa12e05-0eb0-4cff-b8b4-8a196eb475eb is Pending, waiting for it to be Running (with Ready = true) Mar 22 00:07:57.316: INFO: The status of Pod pod-projected-secrets-1aa12e05-0eb0-4cff-b8b4-8a196eb475eb is Pending, waiting for it to be Running (with Ready = true) Mar 22 00:07:58.915: INFO: The status of Pod pod-projected-secrets-1aa12e05-0eb0-4cff-b8b4-8a196eb475eb is Pending, waiting for it to be Running (with Ready = true) Mar 22 00:08:01.142: INFO: The status of Pod pod-projected-secrets-1aa12e05-0eb0-4cff-b8b4-8a196eb475eb is Pending, waiting for it to be Running (with Ready = true) Mar 22 00:08:03.059: INFO: The status of Pod pod-projected-secrets-1aa12e05-0eb0-4cff-b8b4-8a196eb475eb is Pending, waiting for it to be Running (with Ready = true) Mar 22 00:08:04.910: INFO: The status of Pod pod-projected-secrets-1aa12e05-0eb0-4cff-b8b4-8a196eb475eb is Running (Ready = true) STEP: Deleting secret s-test-opt-del-991a4a38-cd73-498c-a05c-7286685bbf76 STEP: Updating secret s-test-opt-upd-6f1f2a3a-fd90-45e7-a0f3-f8359da3a860 STEP: Creating secret with name s-test-opt-create-2f565bcc-eba5-4c48-b17c-77762312a3a5 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 00:08:06.987: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-854" for this suite. 
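The projected-secret case mounts several secrets through one projected volume and marks each source optional, which is what lets the test delete one secret and create another while the pod keeps running. A sketch of that volume definition (secret names shortened from the log):

```go
package sketch

import corev1 "k8s.io/api/core/v1"

// optionalProjectedVolume projects two secrets into a single mount.
// Optional=true means a missing secret leaves a gap instead of blocking
// the pod, so sources can come and go while the volume stays mounted.
func optionalProjectedVolume() corev1.Volume {
	optional := true
	return corev1.Volume{
		Name: "projected-secrets",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{
					{Secret: &corev1.SecretProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: "s-test-opt-del"},
						Optional:             &optional,
					}},
					{Secret: &corev1.SecretProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: "s-test-opt-upd"},
						Optional:             &optional,
					}},
				},
			},
		},
	}
}
```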
• [SLOW TEST:12.874 seconds] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":330,"completed":79,"skipped":1326,"failed":4,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]"]} SS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 00:08:06.996: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating secret with name secret-test-5761c9b2-f577-472e-ab6d-159a5a6a36d4 STEP: Creating a pod to test consume secrets Mar 22 00:08:07.106: INFO: Waiting up to 5m0s for pod "pod-secrets-92a4c54c-c4ff-4fc8-a968-a421cd571f41" in namespace "secrets-6350" to be "Succeeded or Failed" Mar 22 00:08:07.110: INFO: Pod "pod-secrets-92a4c54c-c4ff-4fc8-a968-a421cd571f41": Phase="Pending", Reason="", readiness=false. Elapsed: 3.955158ms Mar 22 00:08:09.490: INFO: Pod "pod-secrets-92a4c54c-c4ff-4fc8-a968-a421cd571f41": Phase="Pending", Reason="", readiness=false. Elapsed: 2.384647842s Mar 22 00:08:11.497: INFO: Pod "pod-secrets-92a4c54c-c4ff-4fc8-a968-a421cd571f41": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.390955802s STEP: Saw pod success Mar 22 00:08:11.497: INFO: Pod "pod-secrets-92a4c54c-c4ff-4fc8-a968-a421cd571f41" satisfied condition "Succeeded or Failed" Mar 22 00:08:11.500: INFO: Trying to get logs from node latest-worker pod pod-secrets-92a4c54c-c4ff-4fc8-a968-a421cd571f41 container secret-volume-test: STEP: delete the pod Mar 22 00:08:11.545: INFO: Waiting for pod pod-secrets-92a4c54c-c4ff-4fc8-a968-a421cd571f41 to disappear Mar 22 00:08:11.597: INFO: Pod pod-secrets-92a4c54c-c4ff-4fc8-a968-a421cd571f41 no longer exists [AfterEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 00:08:11.597: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-6350" for this suite. 
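The 'Waiting up to 5m0s for pod ... to be "Succeeded or Failed"' lines in the last two runs are plain phase polling. A minimal version using the apimachinery wait helper; the 2s interval and the function name are choices for this sketch, not the framework's:

```go
package sketch

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForPodFinished polls the pod's phase every 2s until it reaches a
// terminal phase or the 5m budget runs out, mirroring the log above.
func waitForPodFinished(cs *kubernetes.Clientset, ns, name string) error {
	return wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		switch pod.Status.Phase {
		case corev1.PodSucceeded, corev1.PodFailed:
			return true, nil // terminal; caller inspects which one it was
		default:
			return false, nil
		}
	})
}
```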
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":330,"completed":80,"skipped":1328,"failed":4,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]"]} ------------------------------ [sig-network] Ingress API should support creating Ingress API operations [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-network] Ingress API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 00:08:11.606: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename ingress STEP: Waiting for a default service account to be provisioned in namespace [It] should support creating Ingress API operations [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: getting /apis STEP: getting /apis/networking.k8s.io STEP: getting /apis/networking.k8s.iov1 STEP: creating STEP: getting STEP: listing STEP: watching Mar 22 00:08:11.738: INFO: starting watch STEP: cluster-wide listing STEP: cluster-wide watching Mar 22 00:08:11.742: INFO: starting watch STEP: patching STEP: updating Mar 22 00:08:11.767: INFO: waiting for watch events with expected annotations Mar 22 00:08:11.767: INFO: saw patched and updated annotations STEP: patching /status STEP: updating /status STEP: get /status STEP: deleting STEP: deleting a collection [AfterEach] [sig-network] Ingress API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 00:08:11.943: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "ingress-968" for this suite. 
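The Ingress API case is a pure CRUD walk over networking.k8s.io/v1: create, get, list, watch, patch, update (including /status), then delete singly and by collection. The "STEP: patching" part, for example, reduces to one merge-patch call (the annotation key here is illustrative):

```go
package sketch

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
)

// patchIngress adds an annotation via a JSON merge patch, the same kind of
// mutation the "STEP: patching" line above performs.
func patchIngress(cs *kubernetes.Clientset, ns, name string) error {
	patch := []byte(`{"metadata":{"annotations":{"patched":"true"}}}`)
	_, err := cs.NetworkingV1().Ingresses(ns).Patch(
		context.TODO(), name, types.MergePatchType, patch, metav1.PatchOptions{})
	return err
}
```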
•{"msg":"PASSED [sig-network] Ingress API should support creating Ingress API operations [Conformance]","total":330,"completed":81,"skipped":1328,"failed":4,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 00:08:11.950: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/init_container.go:162 [It] should invoke init containers on a RestartNever pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating the pod Mar 22 00:08:12.018: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 00:08:22.205: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-9833" for this suite. 
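The init-container case builds a RestartNever pod whose init containers must each run to completion, in order, before the app container starts. A sketch of the pod shape; images and commands are placeholders:

```go
package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// initContainerPod runs two init containers sequentially; only after both
// exit 0 does the main container start. With RestartPolicyNever a failing
// init container fails the whole pod instead of being retried.
func initContainerPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{GenerateName: "pod-init-"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			InitContainers: []corev1.Container{
				{Name: "init1", Image: "busybox:1.33", Command: []string{"/bin/true"}},
				{Name: "init2", Image: "busybox:1.33", Command: []string{"/bin/true"}},
			},
			Containers: []corev1.Container{
				{Name: "run1", Image: "busybox:1.33", Command: []string{"sh", "-c", "sleep 1"}},
			},
		},
	}
}
```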
• [SLOW TEST:10.281 seconds] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should invoke init containers on a RestartNever pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":330,"completed":82,"skipped":1356,"failed":4,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]"]} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 00:08:22.231: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields at the schema root [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Mar 22 00:08:22.289: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Mar 22 00:08:25.890: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3366 --namespace=crd-publish-openapi-3366 create -f -' Mar 22 00:08:34.988: INFO: stderr: "" Mar 22 00:08:34.988: INFO: stdout: "e2e-test-crd-publish-openapi-908-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" Mar 22 00:08:34.989: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3366 --namespace=crd-publish-openapi-3366 delete e2e-test-crd-publish-openapi-908-crds test-cr' Mar 22 00:08:35.212: INFO: stderr: "" Mar 22 00:08:35.212: INFO: stdout: "e2e-test-crd-publish-openapi-908-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" Mar 22 00:08:35.212: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3366 --namespace=crd-publish-openapi-3366 apply -f -' Mar 22 00:08:35.692: INFO: stderr: "" Mar 22 00:08:35.692: INFO: stdout: "e2e-test-crd-publish-openapi-908-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" Mar 22 00:08:35.692: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3366 --namespace=crd-publish-openapi-3366 delete e2e-test-crd-publish-openapi-908-crds test-cr' Mar 22 
00:08:35.805: INFO: stderr: "" Mar 22 00:08:35.805: INFO: stdout: "e2e-test-crd-publish-openapi-908-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR Mar 22 00:08:35.805: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3366 explain e2e-test-crd-publish-openapi-908-crds' Mar 22 00:08:36.157: INFO: stderr: "" Mar 22 00:08:36.157: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-908-crd\nVERSION: crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 00:08:39.721: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-3366" for this suite. • [SLOW TEST:17.497 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields at the schema root [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":330,"completed":83,"skipped":1374,"failed":4,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]"]} SSS ------------------------------ [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 00:08:39.728: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-2323.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-2323.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-2323.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-2323.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-2323.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2323.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe /etc/hosts STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 22 00:08:48.293: INFO: DNS probes using dns-2323/dns-test-841bb204-46f7-461d-b178-346678f980ab succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 00:08:48.592: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-2323" for this suite. • [SLOW TEST:9.379 seconds] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":330,"completed":84,"skipped":1377,"failed":4,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]"]} SS ------------------------------ [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 00:08:49.107: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching orphans and release non-matching pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: Orphaning one of the Job's Pods Mar 22 00:09:00.243: INFO: Successfully updated pod "adopt-release-f4c4g" STEP: Checking that the Job readopts the Pod Mar 22 00:09:00.243: 
INFO: Waiting up to 15m0s for pod "adopt-release-f4c4g" in namespace "job-9063" to be "adopted" Mar 22 00:09:00.251: INFO: Pod "adopt-release-f4c4g": Phase="Running", Reason="", readiness=true. Elapsed: 8.405897ms Mar 22 00:09:02.258: INFO: Pod "adopt-release-f4c4g": Phase="Running", Reason="", readiness=true. Elapsed: 2.015104505s Mar 22 00:09:02.258: INFO: Pod "adopt-release-f4c4g" satisfied condition "adopted" STEP: Removing the labels from the Job's Pod Mar 22 00:09:02.771: INFO: Successfully updated pod "adopt-release-f4c4g" STEP: Checking that the Job releases the Pod Mar 22 00:09:02.772: INFO: Waiting up to 15m0s for pod "adopt-release-f4c4g" in namespace "job-9063" to be "released" Mar 22 00:09:02.831: INFO: Pod "adopt-release-f4c4g": Phase="Running", Reason="", readiness=true. Elapsed: 59.778917ms Mar 22 00:09:02.831: INFO: Pod "adopt-release-f4c4g" satisfied condition "released" [AfterEach] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 00:09:02.831: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-9063" for this suite. • [SLOW TEST:13.850 seconds] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching orphans and release non-matching pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":330,"completed":85,"skipped":1379,"failed":4,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]"]} SS ------------------------------ [sig-auth] ServiceAccounts should guarantee kube-root-ca.crt exist in any namespace [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 00:09:02.957: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should guarantee kube-root-ca.crt exist in any namespace [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Mar 22 00:09:03.053: INFO: Got root ca configmap in namespace "svcaccounts-5694" Mar 22 00:09:03.091: INFO: Deleted root ca configmap in namespace "svcaccounts-5694" STEP: waiting for a new root ca configmap created Mar 22 00:09:03.595: INFO: Recreated root ca configmap in namespace "svcaccounts-5694" Mar 22 00:09:03.600: INFO: Updated root ca configmap in namespace "svcaccounts-5694" STEP: waiting for the root ca configmap reconciled Mar 22 00:09:04.104: INFO: Reconciled root ca configmap in namespace "svcaccounts-5694" [AfterEach] [sig-auth] ServiceAccounts 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 00:09:04.104: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-5694" for this suite. •{"msg":"PASSED [sig-auth] ServiceAccounts should guarantee kube-root-ca.crt exist in any namespace [Conformance]","total":330,"completed":86,"skipped":1381,"failed":4,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 00:09:04.114: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should patch a Namespace [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a Namespace STEP: patching the Namespace STEP: get the Namespace and ensuring it has the label [AfterEach] [sig-api-machinery] Namespaces [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 00:09:04.303: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-188" for this suite. STEP: Destroying namespace "nspatchtest-db22b447-d424-46f7-b6e2-d9c0c9bc70dc-3909" for this suite. 
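The Namespaces case patches a namespace with a label and reads it back. Namespaces are cluster-scoped, but the same merge-patch pattern as the Ingress example applies (label key and value here are illustrative):

```go
package sketch

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
)

// patchNamespaceLabel attaches a label to an existing namespace and
// verifies it by reading the object back, as the STEP lines above do.
func patchNamespaceLabel(cs *kubernetes.Clientset, name string) error {
	patch := []byte(`{"metadata":{"labels":{"testLabel":"testValue"}}}`)
	if _, err := cs.CoreV1().Namespaces().Patch(
		context.TODO(), name, types.MergePatchType, patch, metav1.PatchOptions{}); err != nil {
		return err
	}
	ns, err := cs.CoreV1().Namespaces().Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return err
	}
	fmt.Println(ns.Labels["testLabel"]) // "testValue"
	return nil
}
```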
•{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance]","total":330,"completed":87,"skipped":1404,"failed":4,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]"]} SSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 00:09:04.327: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with downward pod [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating pod pod-subpath-test-downwardapi-xxhz STEP: Creating a pod to test atomic-volume-subpath Mar 22 00:09:05.275: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-xxhz" in namespace "subpath-8779" to be "Succeeded or Failed" Mar 22 00:09:05.278: INFO: Pod "pod-subpath-test-downwardapi-xxhz": Phase="Pending", Reason="", readiness=false. Elapsed: 2.763121ms Mar 22 00:09:07.419: INFO: Pod "pod-subpath-test-downwardapi-xxhz": Phase="Pending", Reason="", readiness=false. Elapsed: 2.143518958s Mar 22 00:09:09.424: INFO: Pod "pod-subpath-test-downwardapi-xxhz": Phase="Running", Reason="", readiness=true. Elapsed: 4.149036373s Mar 22 00:09:11.429: INFO: Pod "pod-subpath-test-downwardapi-xxhz": Phase="Running", Reason="", readiness=true. Elapsed: 6.153735777s Mar 22 00:09:13.434: INFO: Pod "pod-subpath-test-downwardapi-xxhz": Phase="Running", Reason="", readiness=true. Elapsed: 8.15895533s Mar 22 00:09:15.438: INFO: Pod "pod-subpath-test-downwardapi-xxhz": Phase="Running", Reason="", readiness=true. Elapsed: 10.163190568s Mar 22 00:09:17.443: INFO: Pod "pod-subpath-test-downwardapi-xxhz": Phase="Running", Reason="", readiness=true. Elapsed: 12.168169397s Mar 22 00:09:19.447: INFO: Pod "pod-subpath-test-downwardapi-xxhz": Phase="Running", Reason="", readiness=true. Elapsed: 14.172407401s Mar 22 00:09:21.451: INFO: Pod "pod-subpath-test-downwardapi-xxhz": Phase="Running", Reason="", readiness=true. Elapsed: 16.175973297s Mar 22 00:09:23.869: INFO: Pod "pod-subpath-test-downwardapi-xxhz": Phase="Running", Reason="", readiness=true. Elapsed: 18.593655435s Mar 22 00:09:25.875: INFO: Pod "pod-subpath-test-downwardapi-xxhz": Phase="Running", Reason="", readiness=true. Elapsed: 20.599666863s Mar 22 00:09:27.880: INFO: Pod "pod-subpath-test-downwardapi-xxhz": Phase="Running", Reason="", readiness=true. Elapsed: 22.605063715s Mar 22 00:09:29.886: INFO: Pod "pod-subpath-test-downwardapi-xxhz": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.61079306s STEP: Saw pod success Mar 22 00:09:29.886: INFO: Pod "pod-subpath-test-downwardapi-xxhz" satisfied condition "Succeeded or Failed" Mar 22 00:09:29.889: INFO: Trying to get logs from node latest-worker pod pod-subpath-test-downwardapi-xxhz container test-container-subpath-downwardapi-xxhz: STEP: delete the pod Mar 22 00:09:29.938: INFO: Waiting for pod pod-subpath-test-downwardapi-xxhz to disappear Mar 22 00:09:30.017: INFO: Pod pod-subpath-test-downwardapi-xxhz no longer exists STEP: Deleting pod pod-subpath-test-downwardapi-xxhz Mar 22 00:09:30.017: INFO: Deleting pod "pod-subpath-test-downwardapi-xxhz" in namespace "subpath-8779" [AfterEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 00:09:30.053: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-8779" for this suite. • [SLOW TEST:25.779 seconds] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with downward pod [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":330,"completed":88,"skipped":1416,"failed":4,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for services [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 00:09:30.106: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for services [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-8409.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-8409.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-8409.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-8409.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-8409.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-8409.svc.cluster.local;check="$$(dig +tcp +noall +answer +search 
_http._tcp.dns-test-service.dns-8409.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-8409.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-8409.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-8409.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-8409.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-8409.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8409.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 46.247.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.247.46_udp@PTR;check="$$(dig +tcp +noall +answer +search 46.247.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.247.46_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-8409.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-8409.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-8409.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-8409.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-8409.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-8409.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-8409.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-8409.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-8409.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-8409.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-8409.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-8409.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8409.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 46.247.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.247.46_udp@PTR;check="$$(dig +tcp +noall +answer +search 46.247.96.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.96.247.46_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 22 00:09:36.543: INFO: Unable to read wheezy_udp@dns-test-service.dns-8409.svc.cluster.local from pod dns-8409/dns-test-db1861e1-0ebc-450c-b62a-c2d1f971e17d: the server could not find the requested resource (get pods dns-test-db1861e1-0ebc-450c-b62a-c2d1f971e17d) Mar 22 00:09:36.547: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8409.svc.cluster.local from pod dns-8409/dns-test-db1861e1-0ebc-450c-b62a-c2d1f971e17d: the server could not find the requested resource (get pods dns-test-db1861e1-0ebc-450c-b62a-c2d1f971e17d) Mar 22 00:09:36.549: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8409.svc.cluster.local from pod dns-8409/dns-test-db1861e1-0ebc-450c-b62a-c2d1f971e17d: the server could not find the requested resource (get pods dns-test-db1861e1-0ebc-450c-b62a-c2d1f971e17d) Mar 22 00:09:36.552: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8409.svc.cluster.local from pod dns-8409/dns-test-db1861e1-0ebc-450c-b62a-c2d1f971e17d: the server could not find the requested resource (get pods dns-test-db1861e1-0ebc-450c-b62a-c2d1f971e17d) Mar 22 00:09:36.571: INFO: Unable to read jessie_udp@dns-test-service.dns-8409.svc.cluster.local from pod dns-8409/dns-test-db1861e1-0ebc-450c-b62a-c2d1f971e17d: the server could not find the requested resource (get pods dns-test-db1861e1-0ebc-450c-b62a-c2d1f971e17d) Mar 22 00:09:36.574: INFO: Unable to read jessie_tcp@dns-test-service.dns-8409.svc.cluster.local from pod dns-8409/dns-test-db1861e1-0ebc-450c-b62a-c2d1f971e17d: the server could not find the requested resource (get pods dns-test-db1861e1-0ebc-450c-b62a-c2d1f971e17d) Mar 22 00:09:36.577: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8409.svc.cluster.local from pod dns-8409/dns-test-db1861e1-0ebc-450c-b62a-c2d1f971e17d: the server could not find the requested resource (get pods dns-test-db1861e1-0ebc-450c-b62a-c2d1f971e17d) Mar 22 00:09:36.580: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8409.svc.cluster.local from pod dns-8409/dns-test-db1861e1-0ebc-450c-b62a-c2d1f971e17d: the server could not find the requested resource (get pods dns-test-db1861e1-0ebc-450c-b62a-c2d1f971e17d) Mar 22 00:09:36.616: INFO: Lookups using dns-8409/dns-test-db1861e1-0ebc-450c-b62a-c2d1f971e17d failed for: [wheezy_udp@dns-test-service.dns-8409.svc.cluster.local wheezy_tcp@dns-test-service.dns-8409.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-8409.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-8409.svc.cluster.local jessie_udp@dns-test-service.dns-8409.svc.cluster.local jessie_tcp@dns-test-service.dns-8409.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-8409.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-8409.svc.cluster.local] Mar 22 00:09:41.621: INFO: Unable to read wheezy_udp@dns-test-service.dns-8409.svc.cluster.local from pod dns-8409/dns-test-db1861e1-0ebc-450c-b62a-c2d1f971e17d: the server could not find the requested resource (get pods dns-test-db1861e1-0ebc-450c-b62a-c2d1f971e17d) Mar 22 00:09:41.624: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8409.svc.cluster.local from pod dns-8409/dns-test-db1861e1-0ebc-450c-b62a-c2d1f971e17d: the server could not find the requested resource (get pods 
dns-test-db1861e1-0ebc-450c-b62a-c2d1f971e17d) Mar 22 00:09:41.627: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8409.svc.cluster.local from pod dns-8409/dns-test-db1861e1-0ebc-450c-b62a-c2d1f971e17d: the server could not find the requested resource (get pods dns-test-db1861e1-0ebc-450c-b62a-c2d1f971e17d) Mar 22 00:09:41.629: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8409.svc.cluster.local from pod dns-8409/dns-test-db1861e1-0ebc-450c-b62a-c2d1f971e17d: the server could not find the requested resource (get pods dns-test-db1861e1-0ebc-450c-b62a-c2d1f971e17d) Mar 22 00:09:41.648: INFO: Unable to read jessie_udp@dns-test-service.dns-8409.svc.cluster.local from pod dns-8409/dns-test-db1861e1-0ebc-450c-b62a-c2d1f971e17d: the server could not find the requested resource (get pods dns-test-db1861e1-0ebc-450c-b62a-c2d1f971e17d) Mar 22 00:09:41.651: INFO: Unable to read jessie_tcp@dns-test-service.dns-8409.svc.cluster.local from pod dns-8409/dns-test-db1861e1-0ebc-450c-b62a-c2d1f971e17d: the server could not find the requested resource (get pods dns-test-db1861e1-0ebc-450c-b62a-c2d1f971e17d) Mar 22 00:09:41.654: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8409.svc.cluster.local from pod dns-8409/dns-test-db1861e1-0ebc-450c-b62a-c2d1f971e17d: the server could not find the requested resource (get pods dns-test-db1861e1-0ebc-450c-b62a-c2d1f971e17d) Mar 22 00:09:41.660: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8409.svc.cluster.local from pod dns-8409/dns-test-db1861e1-0ebc-450c-b62a-c2d1f971e17d: the server could not find the requested resource (get pods dns-test-db1861e1-0ebc-450c-b62a-c2d1f971e17d) Mar 22 00:09:41.677: INFO: Lookups using dns-8409/dns-test-db1861e1-0ebc-450c-b62a-c2d1f971e17d failed for: [wheezy_udp@dns-test-service.dns-8409.svc.cluster.local wheezy_tcp@dns-test-service.dns-8409.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-8409.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-8409.svc.cluster.local jessie_udp@dns-test-service.dns-8409.svc.cluster.local jessie_tcp@dns-test-service.dns-8409.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-8409.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-8409.svc.cluster.local] Mar 22 00:09:46.622: INFO: Unable to read wheezy_udp@dns-test-service.dns-8409.svc.cluster.local from pod dns-8409/dns-test-db1861e1-0ebc-450c-b62a-c2d1f971e17d: the server could not find the requested resource (get pods dns-test-db1861e1-0ebc-450c-b62a-c2d1f971e17d) Mar 22 00:09:46.626: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8409.svc.cluster.local from pod dns-8409/dns-test-db1861e1-0ebc-450c-b62a-c2d1f971e17d: the server could not find the requested resource (get pods dns-test-db1861e1-0ebc-450c-b62a-c2d1f971e17d) Mar 22 00:09:46.629: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8409.svc.cluster.local from pod dns-8409/dns-test-db1861e1-0ebc-450c-b62a-c2d1f971e17d: the server could not find the requested resource (get pods dns-test-db1861e1-0ebc-450c-b62a-c2d1f971e17d) Mar 22 00:09:46.632: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8409.svc.cluster.local from pod dns-8409/dns-test-db1861e1-0ebc-450c-b62a-c2d1f971e17d: the server could not find the requested resource (get pods dns-test-db1861e1-0ebc-450c-b62a-c2d1f971e17d) Mar 22 00:09:46.734: INFO: Unable to read jessie_udp@dns-test-service.dns-8409.svc.cluster.local from pod dns-8409/dns-test-db1861e1-0ebc-450c-b62a-c2d1f971e17d: the 
server could not find the requested resource (get pods dns-test-db1861e1-0ebc-450c-b62a-c2d1f971e17d) Mar 22 00:09:46.737: INFO: Unable to read jessie_tcp@dns-test-service.dns-8409.svc.cluster.local from pod dns-8409/dns-test-db1861e1-0ebc-450c-b62a-c2d1f971e17d: the server could not find the requested resource (get pods dns-test-db1861e1-0ebc-450c-b62a-c2d1f971e17d) Mar 22 00:09:46.740: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8409.svc.cluster.local from pod dns-8409/dns-test-db1861e1-0ebc-450c-b62a-c2d1f971e17d: the server could not find the requested resource (get pods dns-test-db1861e1-0ebc-450c-b62a-c2d1f971e17d) Mar 22 00:09:46.744: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8409.svc.cluster.local from pod dns-8409/dns-test-db1861e1-0ebc-450c-b62a-c2d1f971e17d: the server could not find the requested resource (get pods dns-test-db1861e1-0ebc-450c-b62a-c2d1f971e17d) Mar 22 00:09:46.762: INFO: Lookups using dns-8409/dns-test-db1861e1-0ebc-450c-b62a-c2d1f971e17d failed for: [wheezy_udp@dns-test-service.dns-8409.svc.cluster.local wheezy_tcp@dns-test-service.dns-8409.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-8409.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-8409.svc.cluster.local jessie_udp@dns-test-service.dns-8409.svc.cluster.local jessie_tcp@dns-test-service.dns-8409.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-8409.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-8409.svc.cluster.local] Mar 22 00:09:51.622: INFO: Unable to read wheezy_udp@dns-test-service.dns-8409.svc.cluster.local from pod dns-8409/dns-test-db1861e1-0ebc-450c-b62a-c2d1f971e17d: the server could not find the requested resource (get pods dns-test-db1861e1-0ebc-450c-b62a-c2d1f971e17d) Mar 22 00:09:51.625: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8409.svc.cluster.local from pod dns-8409/dns-test-db1861e1-0ebc-450c-b62a-c2d1f971e17d: the server could not find the requested resource (get pods dns-test-db1861e1-0ebc-450c-b62a-c2d1f971e17d) Mar 22 00:09:51.628: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8409.svc.cluster.local from pod dns-8409/dns-test-db1861e1-0ebc-450c-b62a-c2d1f971e17d: the server could not find the requested resource (get pods dns-test-db1861e1-0ebc-450c-b62a-c2d1f971e17d) Mar 22 00:09:51.631: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8409.svc.cluster.local from pod dns-8409/dns-test-db1861e1-0ebc-450c-b62a-c2d1f971e17d: the server could not find the requested resource (get pods dns-test-db1861e1-0ebc-450c-b62a-c2d1f971e17d) Mar 22 00:09:51.651: INFO: Unable to read jessie_udp@dns-test-service.dns-8409.svc.cluster.local from pod dns-8409/dns-test-db1861e1-0ebc-450c-b62a-c2d1f971e17d: the server could not find the requested resource (get pods dns-test-db1861e1-0ebc-450c-b62a-c2d1f971e17d) Mar 22 00:09:51.656: INFO: Unable to read jessie_tcp@dns-test-service.dns-8409.svc.cluster.local from pod dns-8409/dns-test-db1861e1-0ebc-450c-b62a-c2d1f971e17d: the server could not find the requested resource (get pods dns-test-db1861e1-0ebc-450c-b62a-c2d1f971e17d) Mar 22 00:09:51.659: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8409.svc.cluster.local from pod dns-8409/dns-test-db1861e1-0ebc-450c-b62a-c2d1f971e17d: the server could not find the requested resource (get pods dns-test-db1861e1-0ebc-450c-b62a-c2d1f971e17d) Mar 22 00:09:51.662: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8409.svc.cluster.local from pod 
dns-8409/dns-test-db1861e1-0ebc-450c-b62a-c2d1f971e17d: the server could not find the requested resource (get pods dns-test-db1861e1-0ebc-450c-b62a-c2d1f971e17d) Mar 22 00:09:51.677: INFO: Lookups using dns-8409/dns-test-db1861e1-0ebc-450c-b62a-c2d1f971e17d failed for: [wheezy_udp@dns-test-service.dns-8409.svc.cluster.local wheezy_tcp@dns-test-service.dns-8409.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-8409.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-8409.svc.cluster.local jessie_udp@dns-test-service.dns-8409.svc.cluster.local jessie_tcp@dns-test-service.dns-8409.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-8409.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-8409.svc.cluster.local] Mar 22 00:09:56.621: INFO: Unable to read wheezy_udp@dns-test-service.dns-8409.svc.cluster.local from pod dns-8409/dns-test-db1861e1-0ebc-450c-b62a-c2d1f971e17d: the server could not find the requested resource (get pods dns-test-db1861e1-0ebc-450c-b62a-c2d1f971e17d) Mar 22 00:09:56.624: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8409.svc.cluster.local from pod dns-8409/dns-test-db1861e1-0ebc-450c-b62a-c2d1f971e17d: the server could not find the requested resource (get pods dns-test-db1861e1-0ebc-450c-b62a-c2d1f971e17d) Mar 22 00:09:56.627: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8409.svc.cluster.local from pod dns-8409/dns-test-db1861e1-0ebc-450c-b62a-c2d1f971e17d: the server could not find the requested resource (get pods dns-test-db1861e1-0ebc-450c-b62a-c2d1f971e17d) Mar 22 00:09:56.629: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8409.svc.cluster.local from pod dns-8409/dns-test-db1861e1-0ebc-450c-b62a-c2d1f971e17d: the server could not find the requested resource (get pods dns-test-db1861e1-0ebc-450c-b62a-c2d1f971e17d) Mar 22 00:09:56.651: INFO: Unable to read jessie_udp@dns-test-service.dns-8409.svc.cluster.local from pod dns-8409/dns-test-db1861e1-0ebc-450c-b62a-c2d1f971e17d: the server could not find the requested resource (get pods dns-test-db1861e1-0ebc-450c-b62a-c2d1f971e17d) Mar 22 00:09:56.654: INFO: Unable to read jessie_tcp@dns-test-service.dns-8409.svc.cluster.local from pod dns-8409/dns-test-db1861e1-0ebc-450c-b62a-c2d1f971e17d: the server could not find the requested resource (get pods dns-test-db1861e1-0ebc-450c-b62a-c2d1f971e17d) Mar 22 00:09:56.657: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8409.svc.cluster.local from pod dns-8409/dns-test-db1861e1-0ebc-450c-b62a-c2d1f971e17d: the server could not find the requested resource (get pods dns-test-db1861e1-0ebc-450c-b62a-c2d1f971e17d) Mar 22 00:09:56.661: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8409.svc.cluster.local from pod dns-8409/dns-test-db1861e1-0ebc-450c-b62a-c2d1f971e17d: the server could not find the requested resource (get pods dns-test-db1861e1-0ebc-450c-b62a-c2d1f971e17d) Mar 22 00:09:56.677: INFO: Lookups using dns-8409/dns-test-db1861e1-0ebc-450c-b62a-c2d1f971e17d failed for: [wheezy_udp@dns-test-service.dns-8409.svc.cluster.local wheezy_tcp@dns-test-service.dns-8409.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-8409.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-8409.svc.cluster.local jessie_udp@dns-test-service.dns-8409.svc.cluster.local jessie_tcp@dns-test-service.dns-8409.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-8409.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-8409.svc.cluster.local] Mar 22 
00:10:01.629: INFO: Unable to read wheezy_udp@dns-test-service.dns-8409.svc.cluster.local from pod dns-8409/dns-test-db1861e1-0ebc-450c-b62a-c2d1f971e17d: the server could not find the requested resource (get pods dns-test-db1861e1-0ebc-450c-b62a-c2d1f971e17d) Mar 22 00:10:01.632: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8409.svc.cluster.local from pod dns-8409/dns-test-db1861e1-0ebc-450c-b62a-c2d1f971e17d: the server could not find the requested resource (get pods dns-test-db1861e1-0ebc-450c-b62a-c2d1f971e17d) Mar 22 00:10:01.634: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8409.svc.cluster.local from pod dns-8409/dns-test-db1861e1-0ebc-450c-b62a-c2d1f971e17d: the server could not find the requested resource (get pods dns-test-db1861e1-0ebc-450c-b62a-c2d1f971e17d) Mar 22 00:10:01.647: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8409.svc.cluster.local from pod dns-8409/dns-test-db1861e1-0ebc-450c-b62a-c2d1f971e17d: the server could not find the requested resource (get pods dns-test-db1861e1-0ebc-450c-b62a-c2d1f971e17d) Mar 22 00:10:01.666: INFO: Unable to read jessie_udp@dns-test-service.dns-8409.svc.cluster.local from pod dns-8409/dns-test-db1861e1-0ebc-450c-b62a-c2d1f971e17d: the server could not find the requested resource (get pods dns-test-db1861e1-0ebc-450c-b62a-c2d1f971e17d) Mar 22 00:10:01.669: INFO: Unable to read jessie_tcp@dns-test-service.dns-8409.svc.cluster.local from pod dns-8409/dns-test-db1861e1-0ebc-450c-b62a-c2d1f971e17d: the server could not find the requested resource (get pods dns-test-db1861e1-0ebc-450c-b62a-c2d1f971e17d) Mar 22 00:10:01.672: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8409.svc.cluster.local from pod dns-8409/dns-test-db1861e1-0ebc-450c-b62a-c2d1f971e17d: the server could not find the requested resource (get pods dns-test-db1861e1-0ebc-450c-b62a-c2d1f971e17d) Mar 22 00:10:01.675: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8409.svc.cluster.local from pod dns-8409/dns-test-db1861e1-0ebc-450c-b62a-c2d1f971e17d: the server could not find the requested resource (get pods dns-test-db1861e1-0ebc-450c-b62a-c2d1f971e17d) Mar 22 00:10:01.693: INFO: Lookups using dns-8409/dns-test-db1861e1-0ebc-450c-b62a-c2d1f971e17d failed for: [wheezy_udp@dns-test-service.dns-8409.svc.cluster.local wheezy_tcp@dns-test-service.dns-8409.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-8409.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-8409.svc.cluster.local jessie_udp@dns-test-service.dns-8409.svc.cluster.local jessie_tcp@dns-test-service.dns-8409.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-8409.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-8409.svc.cluster.local] Mar 22 00:10:06.658: INFO: Unable to read jessie_udp@dns-test-service.dns-8409.svc.cluster.local from pod dns-8409/dns-test-db1861e1-0ebc-450c-b62a-c2d1f971e17d: the server could not find the requested resource (get pods dns-test-db1861e1-0ebc-450c-b62a-c2d1f971e17d) Mar 22 00:10:06.661: INFO: Unable to read jessie_tcp@dns-test-service.dns-8409.svc.cluster.local from pod dns-8409/dns-test-db1861e1-0ebc-450c-b62a-c2d1f971e17d: the server could not find the requested resource (get pods dns-test-db1861e1-0ebc-450c-b62a-c2d1f971e17d) Mar 22 00:10:06.681: INFO: Lookups using dns-8409/dns-test-db1861e1-0ebc-450c-b62a-c2d1f971e17d failed for: [jessie_udp@dns-test-service.dns-8409.svc.cluster.local jessie_tcp@dns-test-service.dns-8409.svc.cluster.local] Mar 22 00:10:11.702: 
INFO: DNS probes using dns-8409/dns-test-db1861e1-0ebc-450c-b62a-c2d1f971e17d succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 00:10:12.596: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-8409" for this suite. • [SLOW TEST:42.652 seconds] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should provide DNS for services [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for services [Conformance]","total":330,"completed":89,"skipped":1442,"failed":4,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]"]} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 00:10:12.759: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields in an embedded object [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Mar 22 00:10:12.889: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Mar 22 00:10:16.479: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9836 --namespace=crd-publish-openapi-9836 create -f -' Mar 22 00:10:20.729: INFO: stderr: "" Mar 22 00:10:20.729: INFO: stdout: "e2e-test-crd-publish-openapi-282-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" Mar 22 00:10:20.729: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9836 --namespace=crd-publish-openapi-9836 delete e2e-test-crd-publish-openapi-282-crds test-cr' Mar 22 00:10:20.905: INFO: stderr: "" Mar 22 00:10:20.905: INFO: stdout: "e2e-test-crd-publish-openapi-282-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" Mar 22 00:10:20.905: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9836 --namespace=crd-publish-openapi-9836 apply -f -' Mar 22 00:10:21.844: INFO: stderr: "" Mar 22 00:10:21.844: 
INFO: stdout: "e2e-test-crd-publish-openapi-282-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" Mar 22 00:10:21.844: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9836 --namespace=crd-publish-openapi-9836 delete e2e-test-crd-publish-openapi-282-crds test-cr' Mar 22 00:10:21.990: INFO: stderr: "" Mar 22 00:10:21.990: INFO: stdout: "e2e-test-crd-publish-openapi-282-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR Mar 22 00:10:21.990: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9836 explain e2e-test-crd-publish-openapi-282-crds' Mar 22 00:10:22.282: INFO: stderr: "" Mar 22 00:10:22.282: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-282-crd\nVERSION: crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t<>\n Specification of Waldo\n\n status\t\n Status of Waldo\n\n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 00:10:25.907: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-9836" for this suite. 
• [SLOW TEST:13.243 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields in an embedded object [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":330,"completed":90,"skipped":1462,"failed":4,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Events should ensure that an event can be fetched, patched, deleted, and listed [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-api-machinery] Events /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 00:10:26.003: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that an event can be fetched, patched, deleted, and listed [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a test event STEP: listing all events in all namespaces STEP: patching the test event STEP: fetching the test event STEP: deleting the test event STEP: listing all events in all namespaces [AfterEach] [sig-api-machinery] Events /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 00:10:26.203: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-6007" for this suite. 
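The event lifecycle steps above (create, list across all namespaces, patch, fetch, delete) map one-to-one onto typed client-go calls. A minimal sketch, assuming a ready kubernetes.Interface; the event name, reason, and messages are invented for illustration:

package sketch

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
)

// eventLifecycle runs the same create/list/patch/fetch/delete sequence
// the spec above reports, in a caller-supplied namespace.
func eventLifecycle(ctx context.Context, cs kubernetes.Interface, ns string) error {
	ev := &corev1.Event{
		ObjectMeta:     metav1.ObjectMeta{Name: "test-event"},
		InvolvedObject: corev1.ObjectReference{Namespace: ns},
		Message:        "initial message",
		Reason:         "Testing",
		Type:           "Normal",
	}
	if _, err := cs.CoreV1().Events(ns).Create(ctx, ev, metav1.CreateOptions{}); err != nil {
		return err
	}
	// List in all namespaces (metav1.NamespaceAll is the empty string).
	if _, err := cs.CoreV1().Events(metav1.NamespaceAll).List(ctx, metav1.ListOptions{}); err != nil {
		return err
	}
	patch := []byte(`{"message":"patched message"}`)
	if _, err := cs.CoreV1().Events(ns).Patch(ctx, "test-event", types.StrategicMergePatchType, patch, metav1.PatchOptions{}); err != nil {
		return err
	}
	if _, err := cs.CoreV1().Events(ns).Get(ctx, "test-event", metav1.GetOptions{}); err != nil {
		return err
	}
	return cs.CoreV1().Events(ns).Delete(ctx, "test-event", metav1.DeleteOptions{})
}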
•{"msg":"PASSED [sig-api-machinery] Events should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","total":330,"completed":91,"skipped":1514,"failed":4,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Pods should be updated [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 00:10:26.213: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:187 [It] should be updated [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating the pod STEP: submitting the pod to kubernetes Mar 22 00:10:26.704: INFO: The status of Pod pod-update-0eb66f6d-b0e0-4ecb-9191-f3fc072b2c55 is Pending, waiting for it to be Running (with Ready = true) Mar 22 00:10:28.712: INFO: The status of Pod pod-update-0eb66f6d-b0e0-4ecb-9191-f3fc072b2c55 is Pending, waiting for it to be Running (with Ready = true) Mar 22 00:10:30.749: INFO: The status of Pod pod-update-0eb66f6d-b0e0-4ecb-9191-f3fc072b2c55 is Pending, waiting for it to be Running (with Ready = true) Mar 22 00:10:32.740: INFO: The status of Pod pod-update-0eb66f6d-b0e0-4ecb-9191-f3fc072b2c55 is Running (Ready = true) STEP: verifying the pod is in kubernetes STEP: updating the pod Mar 22 00:10:33.395: INFO: Successfully updated pod "pod-update-0eb66f6d-b0e0-4ecb-9191-f3fc072b2c55" STEP: verifying the updated pod is in kubernetes Mar 22 00:10:33.436: INFO: Pod update OK [AfterEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 00:10:33.436: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-3397" for this suite. 
• [SLOW TEST:7.244 seconds] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should be updated [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Pods should be updated [NodeConformance] [Conformance]","total":330,"completed":92,"skipped":1546,"failed":4,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]"]} S ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 00:10:33.457: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test emptydir 0666 on node default medium Mar 22 00:10:33.600: INFO: Waiting up to 5m0s for pod "pod-a6f70a17-4b15-4ef6-bb5d-9af4dc454e49" in namespace "emptydir-1068" to be "Succeeded or Failed" Mar 22 00:10:33.646: INFO: Pod "pod-a6f70a17-4b15-4ef6-bb5d-9af4dc454e49": Phase="Pending", Reason="", readiness=false. Elapsed: 46.163688ms Mar 22 00:10:35.651: INFO: Pod "pod-a6f70a17-4b15-4ef6-bb5d-9af4dc454e49": Phase="Pending", Reason="", readiness=false. Elapsed: 2.050788138s Mar 22 00:10:37.714: INFO: Pod "pod-a6f70a17-4b15-4ef6-bb5d-9af4dc454e49": Phase="Running", Reason="", readiness=true. Elapsed: 4.113485291s Mar 22 00:10:40.074: INFO: Pod "pod-a6f70a17-4b15-4ef6-bb5d-9af4dc454e49": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.473323338s STEP: Saw pod success Mar 22 00:10:40.074: INFO: Pod "pod-a6f70a17-4b15-4ef6-bb5d-9af4dc454e49" satisfied condition "Succeeded or Failed" Mar 22 00:10:40.077: INFO: Trying to get logs from node latest-worker2 pod pod-a6f70a17-4b15-4ef6-bb5d-9af4dc454e49 container test-container: STEP: delete the pod Mar 22 00:10:41.013: INFO: Waiting for pod pod-a6f70a17-4b15-4ef6-bb5d-9af4dc454e49 to disappear Mar 22 00:10:41.073: INFO: Pod pod-a6f70a17-4b15-4ef6-bb5d-9af4dc454e49 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 00:10:41.074: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1068" for this suite. 
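The pod in this test writes a mode-0666 file into a default-medium emptyDir as a non-root user and must exit 0 so the pod reaches "Succeeded". A hand-written pod of that shape (the image, command, UID, and paths below are illustrative, not the test's fixture):

package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// emptyDirPod runs once as UID 1000, writes a 0666 file into an
// emptyDir on the default medium, and exits so the phase can reach
// Succeeded, matching the "Succeeded or Failed" wait above.
func emptyDirPod(ns string) *corev1.Pod {
	nonRoot := int64(1000)
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-demo", Namespace: ns},
		Spec: corev1.PodSpec{
			RestartPolicy:   corev1.RestartPolicyNever,
			SecurityContext: &corev1.PodSecurityContext{RunAsUser: &nonRoot},
			Volumes: []corev1.Volume{{
				Name:         "scratch",
				VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
			}},
			Containers: []corev1.Container{{
				Name:         "test-container",
				Image:        "busybox",
				Command:      []string{"sh", "-c", "touch /scratch/f && chmod 0666 /scratch/f && ls -l /scratch/f"},
				VolumeMounts: []corev1.VolumeMount{{Name: "scratch", MountPath: "/scratch"}},
			}},
		},
	}
}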
• [SLOW TEST:7.628 seconds] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":330,"completed":93,"skipped":1547,"failed":4,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]"]} SSSSSSSS ------------------------------ [sig-node] PodTemplates should run the lifecycle of PodTemplates [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-node] PodTemplates /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 00:10:41.086: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename podtemplate STEP: Waiting for a default service account to be provisioned in namespace [It] should run the lifecycle of PodTemplates [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [AfterEach] [sig-node] PodTemplates /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 00:10:42.462: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "podtemplate-4685" for this suite. 
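The PodTemplate lifecycle exercised above reduces to create, patch, get, and delete on the typed client. A minimal sketch; the template name, image, and patch payload are invented for illustration:

package sketch

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
)

// podTemplateLifecycle creates a PodTemplate, patches a label onto it,
// reads it back, and deletes it.
func podTemplateLifecycle(ctx context.Context, cs kubernetes.Interface, ns string) error {
	pt := &corev1.PodTemplate{
		ObjectMeta: metav1.ObjectMeta{Name: "nginx-template"},
		Template: corev1.PodTemplateSpec{
			Spec: corev1.PodSpec{Containers: []corev1.Container{{Name: "nginx", Image: "nginx"}}},
		},
	}
	if _, err := cs.CoreV1().PodTemplates(ns).Create(ctx, pt, metav1.CreateOptions{}); err != nil {
		return err
	}
	patch := []byte(`{"metadata":{"labels":{"podtemplate":"patched"}}}`)
	if _, err := cs.CoreV1().PodTemplates(ns).Patch(ctx, "nginx-template", types.StrategicMergePatchType, patch, metav1.PatchOptions{}); err != nil {
		return err
	}
	if _, err := cs.CoreV1().PodTemplates(ns).Get(ctx, "nginx-template", metav1.GetOptions{}); err != nil {
		return err
	}
	return cs.CoreV1().PodTemplates(ns).Delete(ctx, "nginx-template", metav1.DeleteOptions{})
}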
•{"msg":"PASSED [sig-node] PodTemplates should run the lifecycle of PodTemplates [Conformance]","total":330,"completed":94,"skipped":1555,"failed":4,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]"]} SSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 00:10:42.476: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241 [BeforeEach] Update Demo /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:293 [It] should create and stop a replication controller [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a replication controller Mar 22 00:10:42.697: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=kubectl-8346 create -f -' Mar 22 00:10:43.670: INFO: stderr: "" Mar 22 00:10:43.670: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Mar 22 00:10:43.670: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=kubectl-8346 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Mar 22 00:10:43.907: INFO: stderr: "" Mar 22 00:10:43.907: INFO: stdout: "" STEP: Replicas for name=update-demo: expected=2 actual=0 Mar 22 00:10:48.908: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=kubectl-8346 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Mar 22 00:10:49.288: INFO: stderr: "" Mar 22 00:10:49.289: INFO: stdout: "update-demo-nautilus-bwhkk update-demo-nautilus-jspm8 " Mar 22 00:10:49.289: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=kubectl-8346 get pods update-demo-nautilus-bwhkk -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}}' Mar 22 00:10:49.494: INFO: stderr: "" Mar 22 00:10:49.494: INFO: stdout: "" Mar 22 00:10:49.494: INFO: update-demo-nautilus-bwhkk is created but not running Mar 22 00:10:54.495: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=kubectl-8346 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Mar 22 00:10:54.598: INFO: stderr: "" Mar 22 00:10:54.598: INFO: stdout: "update-demo-nautilus-bwhkk update-demo-nautilus-jspm8 " Mar 22 00:10:54.598: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=kubectl-8346 get pods update-demo-nautilus-bwhkk -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Mar 22 00:10:54.691: INFO: stderr: "" Mar 22 00:10:54.691: INFO: stdout: "true" Mar 22 00:10:54.691: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=kubectl-8346 get pods update-demo-nautilus-bwhkk -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Mar 22 00:10:54.782: INFO: stderr: "" Mar 22 00:10:54.782: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.4" Mar 22 00:10:54.782: INFO: validating pod update-demo-nautilus-bwhkk Mar 22 00:10:54.786: INFO: got data: { "image": "nautilus.jpg" } Mar 22 00:10:54.786: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 22 00:10:54.786: INFO: update-demo-nautilus-bwhkk is verified up and running Mar 22 00:10:54.786: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=kubectl-8346 get pods update-demo-nautilus-jspm8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Mar 22 00:10:54.898: INFO: stderr: "" Mar 22 00:10:54.899: INFO: stdout: "true" Mar 22 00:10:54.899: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=kubectl-8346 get pods update-demo-nautilus-jspm8 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Mar 22 00:10:55.007: INFO: stderr: "" Mar 22 00:10:55.007: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.4" Mar 22 00:10:55.007: INFO: validating pod update-demo-nautilus-jspm8 Mar 22 00:10:55.012: INFO: got data: { "image": "nautilus.jpg" } Mar 22 00:10:55.012: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 22 00:10:55.012: INFO: update-demo-nautilus-jspm8 is verified up and running STEP: using delete to clean up resources Mar 22 00:10:55.012: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=kubectl-8346 delete --grace-period=0 --force -f -' Mar 22 00:10:55.211: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Mar 22 00:10:55.211: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Mar 22 00:10:55.211: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=kubectl-8346 get rc,svc -l name=update-demo --no-headers' Mar 22 00:10:55.328: INFO: stderr: "No resources found in kubectl-8346 namespace.\n" Mar 22 00:10:55.328: INFO: stdout: "" Mar 22 00:10:55.328: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=kubectl-8346 get pods -l name=update-demo -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Mar 22 00:10:55.428: INFO: stderr: "" Mar 22 00:10:55.428: INFO: stdout: "update-demo-nautilus-bwhkk\nupdate-demo-nautilus-jspm8\n" Mar 22 00:10:55.929: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=kubectl-8346 get rc,svc -l name=update-demo --no-headers' Mar 22 00:10:56.164: INFO: stderr: "No resources found in kubectl-8346 namespace.\n" Mar 22 00:10:56.164: INFO: stdout: "" Mar 22 00:10:56.164: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=kubectl-8346 get pods -l name=update-demo -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Mar 22 00:10:56.282: INFO: stderr: "" Mar 22 00:10:56.283: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 00:10:56.283: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8346" for this suite. 
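The waiting logic in the log polls every five seconds until both name=update-demo pods report Ready, shelling out to kubectl with Go templates. The same check can be written directly against the API; a sketch assuming client-go's wait helpers as available in this release (function and argument names are invented):

package sketch

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForRCPods polls until `want` pods matching the label selector
// report the Ready condition, mirroring the 5s poll loop above.
func waitForRCPods(ctx context.Context, cs kubernetes.Interface, ns, selector string, want int) error {
	return wait.PollImmediate(5*time.Second, 2*time.Minute, func() (bool, error) {
		pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
		if err != nil {
			return false, err
		}
		ready := 0
		for _, p := range pods.Items {
			for _, c := range p.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					ready++
				}
			}
		}
		return ready == want, nil
	})
}

For the run above the call would be waitForRCPods(ctx, cs, "kubectl-8346", "name=update-demo", 2).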
• [SLOW TEST:13.816 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:291 should create and stop a replication controller [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","total":330,"completed":95,"skipped":1568,"failed":4,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 00:10:56.293: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 [It] should update labels on modification [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating the pod Mar 22 00:10:56.534: INFO: The status of Pod labelsupdate3588e973-cfe6-4e85-ae66-75732b54e985 is Pending, waiting for it to be Running (with Ready = true) Mar 22 00:10:58.540: INFO: The status of Pod labelsupdate3588e973-cfe6-4e85-ae66-75732b54e985 is Pending, waiting for it to be Running (with Ready = true) Mar 22 00:11:00.539: INFO: The status of Pod labelsupdate3588e973-cfe6-4e85-ae66-75732b54e985 is Running (Ready = true) Mar 22 00:11:01.065: INFO: Successfully updated pod "labelsupdate3588e973-cfe6-4e85-ae66-75732b54e985" [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 00:11:05.169: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7126" for this suite. 
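The pod in this test mounts its own labels through a projected downwardAPI volume; when the labels change, the kubelet rewrites the projected file, which is what the test waits for after "Successfully updated pod". A minimal sketch of such a volume (the volume and file names are invented):

package sketch

import (
	corev1 "k8s.io/api/core/v1"
)

// labelsVolume projects the pod's own metadata.labels into a file named
// "labels"; the kubelet keeps the file in sync as labels are modified.
func labelsVolume() corev1.Volume {
	return corev1.Volume{
		Name: "podinfo",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{{
					DownwardAPI: &corev1.DownwardAPIProjection{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path:     "labels",
							FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.labels"},
						}},
					},
				}},
			},
		},
	}
}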
• [SLOW TEST:8.887 seconds] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should update labels on modification [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":330,"completed":96,"skipped":1607,"failed":4,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]"]} SSSS ------------------------------ [sig-network] Service endpoints latency should not be very high [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-network] Service endpoints latency /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 00:11:05.181: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svc-latency STEP: Waiting for a default service account to be provisioned in namespace [It] should not be very high [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Mar 22 00:11:05.289: INFO: >>> kubeConfig: /root/.kube/config STEP: creating replication controller svc-latency-rc in namespace svc-latency-2471 I0322 00:11:05.313466 7 runners.go:190] Created replication controller with name: svc-latency-rc, namespace: svc-latency-2471, replica count: 1 I0322 00:11:06.364396 7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0322 00:11:07.364609 7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0322 00:11:08.365231 7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0322 00:11:09.365605 7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Mar 22 00:11:09.522: INFO: Created: latency-svc-hkn66 Mar 22 00:11:09.532: INFO: Got endpoints: latency-svc-hkn66 [65.739393ms] Mar 22 00:11:09.601: INFO: Created: latency-svc-6sjsd Mar 22 00:11:09.654: INFO: Got endpoints: latency-svc-6sjsd [121.750095ms] Mar 22 00:11:09.674: INFO: Created: latency-svc-xlhph Mar 22 00:11:09.691: INFO: Got endpoints: latency-svc-xlhph [159.109245ms] Mar 22 00:11:09.786: INFO: Created: latency-svc-nx4gz Mar 22 00:11:09.802: INFO: Got endpoints: latency-svc-nx4gz [270.191988ms] Mar 22 00:11:09.835: INFO: Created: latency-svc-htg26 Mar 22 00:11:09.882: INFO: Got endpoints: latency-svc-htg26 [350.47198ms] Mar 22 00:11:09.954: INFO: Created: latency-svc-tt4ht Mar 22 00:11:09.977: INFO: Got endpoints: latency-svc-tt4ht [444.566485ms] Mar 22 00:11:10.024: INFO: Created: latency-svc-bk2n7 Mar 22 00:11:10.041: 
INFO: Got endpoints: latency-svc-bk2n7 [509.612305ms] Mar 22 00:11:10.079: INFO: Created: latency-svc-gp5cb Mar 22 00:11:10.090: INFO: Got endpoints: latency-svc-gp5cb [557.926406ms] Mar 22 00:11:10.117: INFO: Created: latency-svc-pxrth Mar 22 00:11:10.135: INFO: Got endpoints: latency-svc-pxrth [603.011536ms] Mar 22 00:11:10.158: INFO: Created: latency-svc-76qmt Mar 22 00:11:10.229: INFO: Got endpoints: latency-svc-76qmt [696.9954ms] Mar 22 00:11:10.242: INFO: Created: latency-svc-rm4mr Mar 22 00:11:10.262: INFO: Got endpoints: latency-svc-rm4mr [729.749069ms] Mar 22 00:11:10.321: INFO: Created: latency-svc-9n46d Mar 22 00:11:10.348: INFO: Got endpoints: latency-svc-9n46d [815.975434ms] Mar 22 00:11:10.391: INFO: Created: latency-svc-hdrt6 Mar 22 00:11:10.405: INFO: Got endpoints: latency-svc-hdrt6 [872.661271ms] Mar 22 00:11:10.432: INFO: Created: latency-svc-f4244 Mar 22 00:11:10.446: INFO: Got endpoints: latency-svc-f4244 [914.457942ms] Mar 22 00:11:10.493: INFO: Created: latency-svc-6zhct Mar 22 00:11:10.530: INFO: Got endpoints: latency-svc-6zhct [998.423582ms] Mar 22 00:11:10.558: INFO: Created: latency-svc-4rm72 Mar 22 00:11:10.575: INFO: Got endpoints: latency-svc-4rm72 [1.043104866s] Mar 22 00:11:10.637: INFO: Created: latency-svc-zwnkv Mar 22 00:11:10.662: INFO: Got endpoints: latency-svc-zwnkv [1.008455749s] Mar 22 00:11:10.663: INFO: Created: latency-svc-rg84z Mar 22 00:11:10.692: INFO: Got endpoints: latency-svc-rg84z [1.001371652s] Mar 22 00:11:10.805: INFO: Created: latency-svc-kdpxf Mar 22 00:11:10.839: INFO: Got endpoints: latency-svc-kdpxf [1.036151087s] Mar 22 00:11:10.865: INFO: Created: latency-svc-rphkm Mar 22 00:11:10.899: INFO: Got endpoints: latency-svc-rphkm [1.016597415s] Mar 22 00:11:10.939: INFO: Created: latency-svc-pnt8k Mar 22 00:11:10.961: INFO: Got endpoints: latency-svc-pnt8k [984.833046ms] Mar 22 00:11:10.993: INFO: Created: latency-svc-fpsx7 Mar 22 00:11:11.031: INFO: Got endpoints: latency-svc-fpsx7 [990.02029ms] Mar 22 00:11:11.056: INFO: Created: latency-svc-w2knr Mar 22 00:11:11.076: INFO: Got endpoints: latency-svc-w2knr [985.960318ms] Mar 22 00:11:11.175: INFO: Created: latency-svc-9xdnq Mar 22 00:11:11.190: INFO: Got endpoints: latency-svc-9xdnq [1.054942898s] Mar 22 00:11:11.191: INFO: Created: latency-svc-bd479 Mar 22 00:11:11.220: INFO: Got endpoints: latency-svc-bd479 [991.18887ms] Mar 22 00:11:11.254: INFO: Created: latency-svc-4pxj7 Mar 22 00:11:11.313: INFO: Got endpoints: latency-svc-4pxj7 [1.051007463s] Mar 22 00:11:11.332: INFO: Created: latency-svc-nfqvq Mar 22 00:11:11.348: INFO: Got endpoints: latency-svc-nfqvq [999.430565ms] Mar 22 00:11:11.383: INFO: Created: latency-svc-2j9z8 Mar 22 00:11:11.402: INFO: Got endpoints: latency-svc-2j9z8 [996.992748ms] Mar 22 00:11:11.501: INFO: Created: latency-svc-kskpr Mar 22 00:11:11.515: INFO: Got endpoints: latency-svc-kskpr [1.068547502s] Mar 22 00:11:11.581: INFO: Created: latency-svc-qfggv Mar 22 00:11:11.594: INFO: Got endpoints: latency-svc-qfggv [1.063051189s] Mar 22 00:11:11.734: INFO: Created: latency-svc-c489l Mar 22 00:11:11.813: INFO: Got endpoints: latency-svc-c489l [1.238329771s] Mar 22 00:11:11.904: INFO: Created: latency-svc-2l4jl Mar 22 00:11:11.959: INFO: Got endpoints: latency-svc-2l4jl [1.296454937s] Mar 22 00:11:12.074: INFO: Created: latency-svc-9m9bb Mar 22 00:11:12.104: INFO: Got endpoints: latency-svc-9m9bb [1.410924234s] Mar 22 00:11:12.104: INFO: Created: latency-svc-79v7w Mar 22 00:11:12.145: INFO: Got endpoints: latency-svc-79v7w [1.306642171s] Mar 22 00:11:12.241: 
INFO: Created: latency-svc-8fln2 Mar 22 00:11:12.262: INFO: Got endpoints: latency-svc-8fln2 [1.362456066s] Mar 22 00:11:12.314: INFO: Created: latency-svc-6vvnj Mar 22 00:11:12.327: INFO: Got endpoints: latency-svc-6vvnj [1.36551432s] Mar 22 00:11:12.397: INFO: Created: latency-svc-jhpgz Mar 22 00:11:12.451: INFO: Got endpoints: latency-svc-jhpgz [1.419732407s] Mar 22 00:11:12.453: INFO: Created: latency-svc-lh85d Mar 22 00:11:12.473: INFO: Got endpoints: latency-svc-lh85d [1.397234498s] Mar 22 00:11:12.602: INFO: Created: latency-svc-vlfdk Mar 22 00:11:12.623: INFO: Got endpoints: latency-svc-vlfdk [1.433381101s] Mar 22 00:11:12.768: INFO: Created: latency-svc-fhmps Mar 22 00:11:12.779: INFO: Got endpoints: latency-svc-fhmps [1.558588058s] Mar 22 00:11:12.845: INFO: Created: latency-svc-mv9wj Mar 22 00:11:12.872: INFO: Got endpoints: latency-svc-mv9wj [1.559095547s] Mar 22 00:11:12.972: INFO: Created: latency-svc-5c2zb Mar 22 00:11:13.009: INFO: Got endpoints: latency-svc-5c2zb [1.660941219s] Mar 22 00:11:13.009: INFO: Created: latency-svc-t6xjp Mar 22 00:11:13.052: INFO: Got endpoints: latency-svc-t6xjp [1.65037829s] Mar 22 00:11:13.142: INFO: Created: latency-svc-vq622 Mar 22 00:11:13.173: INFO: Got endpoints: latency-svc-vq622 [1.658625133s] Mar 22 00:11:13.253: INFO: Created: latency-svc-fprcg Mar 22 00:11:13.256: INFO: Got endpoints: latency-svc-fprcg [1.662706318s] Mar 22 00:11:13.286: INFO: Created: latency-svc-nk8zv Mar 22 00:11:13.304: INFO: Got endpoints: latency-svc-nk8zv [1.490262243s] Mar 22 00:11:13.328: INFO: Created: latency-svc-8wrpt Mar 22 00:11:13.385: INFO: Got endpoints: latency-svc-8wrpt [1.42639505s] Mar 22 00:11:13.429: INFO: Created: latency-svc-lbt68 Mar 22 00:11:13.441: INFO: Got endpoints: latency-svc-lbt68 [1.337695191s] Mar 22 00:11:13.544: INFO: Created: latency-svc-45jxh Mar 22 00:11:13.558: INFO: Got endpoints: latency-svc-45jxh [1.412568567s] Mar 22 00:11:13.619: INFO: Created: latency-svc-rhp4p Mar 22 00:11:13.672: INFO: Got endpoints: latency-svc-rhp4p [1.409859764s] Mar 22 00:11:13.740: INFO: Created: latency-svc-6tc5l Mar 22 00:11:13.762: INFO: Got endpoints: latency-svc-6tc5l [1.434502364s] Mar 22 00:11:13.809: INFO: Created: latency-svc-6j2nh Mar 22 00:11:13.849: INFO: Got endpoints: latency-svc-6j2nh [1.398012166s] Mar 22 00:11:13.966: INFO: Created: latency-svc-7c6w9 Mar 22 00:11:13.976: INFO: Got endpoints: latency-svc-7c6w9 [1.502112257s] Mar 22 00:11:14.122: INFO: Created: latency-svc-q5wrr Mar 22 00:11:14.172: INFO: Got endpoints: latency-svc-q5wrr [1.548384946s] Mar 22 00:11:14.173: INFO: Created: latency-svc-t75j6 Mar 22 00:11:14.222: INFO: Got endpoints: latency-svc-t75j6 [1.442867862s] Mar 22 00:11:14.282: INFO: Created: latency-svc-jtn5r Mar 22 00:11:14.298: INFO: Got endpoints: latency-svc-jtn5r [1.425557889s] Mar 22 00:11:14.327: INFO: Created: latency-svc-m7fp9 Mar 22 00:11:14.345: INFO: Got endpoints: latency-svc-m7fp9 [1.33673109s] Mar 22 00:11:14.397: INFO: Created: latency-svc-47ttt Mar 22 00:11:14.411: INFO: Got endpoints: latency-svc-47ttt [1.35894283s] Mar 22 00:11:14.443: INFO: Created: latency-svc-khdfl Mar 22 00:11:14.460: INFO: Got endpoints: latency-svc-khdfl [1.286124551s] Mar 22 00:11:14.528: INFO: Created: latency-svc-m97mb Mar 22 00:11:14.561: INFO: Got endpoints: latency-svc-m97mb [1.304637893s] Mar 22 00:11:14.563: INFO: Created: latency-svc-gd9vm Mar 22 00:11:14.592: INFO: Got endpoints: latency-svc-gd9vm [1.288586849s] Mar 22 00:11:14.622: INFO: Created: latency-svc-xz78f Mar 22 00:11:14.666: INFO: Got endpoints: 
latency-svc-xz78f [1.281060751s] Mar 22 00:11:14.696: INFO: Created: latency-svc-dk5nb Mar 22 00:11:14.720: INFO: Got endpoints: latency-svc-dk5nb [1.278800445s] Mar 22 00:11:14.747: INFO: Created: latency-svc-8j7h6 Mar 22 00:11:14.762: INFO: Got endpoints: latency-svc-8j7h6 [1.204273825s] Mar 22 00:11:14.967: INFO: Created: latency-svc-tgcdc Mar 22 00:11:15.062: INFO: Got endpoints: latency-svc-tgcdc [1.390465785s] Mar 22 00:11:15.265: INFO: Created: latency-svc-qnldh Mar 22 00:11:15.812: INFO: Got endpoints: latency-svc-qnldh [2.050298201s] Mar 22 00:11:16.040: INFO: Created: latency-svc-jqgxk Mar 22 00:11:16.071: INFO: Got endpoints: latency-svc-jqgxk [2.221567359s] Mar 22 00:11:16.169: INFO: Created: latency-svc-6zqcv Mar 22 00:11:16.184: INFO: Got endpoints: latency-svc-6zqcv [2.208623622s] Mar 22 00:11:16.469: INFO: Created: latency-svc-lrfgc Mar 22 00:11:16.478: INFO: Got endpoints: latency-svc-lrfgc [2.305679799s] Mar 22 00:11:16.758: INFO: Created: latency-svc-8rrgs Mar 22 00:11:16.766: INFO: Got endpoints: latency-svc-8rrgs [2.543828971s] Mar 22 00:11:16.790: INFO: Created: latency-svc-54q9t Mar 22 00:11:16.804: INFO: Got endpoints: latency-svc-54q9t [2.506035759s] Mar 22 00:11:16.836: INFO: Created: latency-svc-8n597 Mar 22 00:11:16.905: INFO: Got endpoints: latency-svc-8n597 [2.559913182s] Mar 22 00:11:16.928: INFO: Created: latency-svc-qjfcx Mar 22 00:11:16.970: INFO: Got endpoints: latency-svc-qjfcx [2.559094891s] Mar 22 00:11:17.050: INFO: Created: latency-svc-cvh2b Mar 22 00:11:17.055: INFO: Got endpoints: latency-svc-cvh2b [2.595737999s] Mar 22 00:11:17.078: INFO: Created: latency-svc-x2m2f Mar 22 00:11:17.122: INFO: Got endpoints: latency-svc-x2m2f [2.560889522s] Mar 22 00:11:17.199: INFO: Created: latency-svc-8j6mq Mar 22 00:11:17.226: INFO: Got endpoints: latency-svc-8j6mq [2.633174235s] Mar 22 00:11:17.226: INFO: Created: latency-svc-qsksw Mar 22 00:11:17.252: INFO: Got endpoints: latency-svc-qsksw [2.585597957s] Mar 22 00:11:17.280: INFO: Created: latency-svc-87gmn Mar 22 00:11:17.325: INFO: Got endpoints: latency-svc-87gmn [2.60445151s] Mar 22 00:11:17.342: INFO: Created: latency-svc-nxbmp Mar 22 00:11:17.359: INFO: Got endpoints: latency-svc-nxbmp [2.596963428s] Mar 22 00:11:17.378: INFO: Created: latency-svc-tp2xx Mar 22 00:11:17.395: INFO: Got endpoints: latency-svc-tp2xx [2.33271148s] Mar 22 00:11:17.424: INFO: Created: latency-svc-kr2sj Mar 22 00:11:17.480: INFO: Got endpoints: latency-svc-kr2sj [1.668257325s] Mar 22 00:11:17.502: INFO: Created: latency-svc-r7vzm Mar 22 00:11:17.529: INFO: Got endpoints: latency-svc-r7vzm [1.458226234s] Mar 22 00:11:17.552: INFO: Created: latency-svc-dxp4r Mar 22 00:11:17.624: INFO: Got endpoints: latency-svc-dxp4r [1.439377331s] Mar 22 00:11:17.625: INFO: Created: latency-svc-bt5gz Mar 22 00:11:17.643: INFO: Got endpoints: latency-svc-bt5gz [1.165751401s] Mar 22 00:11:17.660: INFO: Created: latency-svc-9twcc Mar 22 00:11:17.688: INFO: Got endpoints: latency-svc-9twcc [921.734148ms] Mar 22 00:11:17.798: INFO: Created: latency-svc-4hmlp Mar 22 00:11:17.816: INFO: Got endpoints: latency-svc-4hmlp [172.371529ms] Mar 22 00:11:17.858: INFO: Created: latency-svc-d46ww Mar 22 00:11:17.889: INFO: Got endpoints: latency-svc-d46ww [1.08574866s] Mar 22 00:11:17.970: INFO: Created: latency-svc-lpnb5 Mar 22 00:11:17.981: INFO: Got endpoints: latency-svc-lpnb5 [1.075389766s] Mar 22 00:11:18.074: INFO: Created: latency-svc-zft25 Mar 22 00:11:18.083: INFO: Got endpoints: latency-svc-zft25 [1.113005657s] Mar 22 00:11:18.206: INFO: Created: 
latency-svc-vpmh6 Mar 22 00:11:18.258: INFO: Got endpoints: latency-svc-vpmh6 [1.202933803s] Mar 22 00:11:18.260: INFO: Created: latency-svc-l7xgn Mar 22 00:11:18.343: INFO: Got endpoints: latency-svc-l7xgn [1.221062828s] Mar 22 00:11:18.362: INFO: Created: latency-svc-f2rms Mar 22 00:11:18.370: INFO: Got endpoints: latency-svc-f2rms [1.144934339s] Mar 22 00:11:18.408: INFO: Created: latency-svc-bv9jb Mar 22 00:11:18.415: INFO: Got endpoints: latency-svc-bv9jb [1.162959505s] Mar 22 00:11:18.439: INFO: Created: latency-svc-k97hf Mar 22 00:11:18.498: INFO: Got endpoints: latency-svc-k97hf [1.17369271s] Mar 22 00:11:18.518: INFO: Created: latency-svc-fvhfj Mar 22 00:11:18.542: INFO: Got endpoints: latency-svc-fvhfj [1.1823844s] Mar 22 00:11:18.572: INFO: Created: latency-svc-zn6d8 Mar 22 00:11:18.584: INFO: Got endpoints: latency-svc-zn6d8 [1.188867675s] Mar 22 00:11:18.642: INFO: Created: latency-svc-27qrb Mar 22 00:11:18.708: INFO: Got endpoints: latency-svc-27qrb [1.227875431s] Mar 22 00:11:18.709: INFO: Created: latency-svc-cxv5n Mar 22 00:11:18.782: INFO: Got endpoints: latency-svc-cxv5n [1.252924852s] Mar 22 00:11:18.907: INFO: Created: latency-svc-pnzrz Mar 22 00:11:18.922: INFO: Got endpoints: latency-svc-pnzrz [1.298212554s] Mar 22 00:11:19.040: INFO: Created: latency-svc-rwfq6 Mar 22 00:11:19.072: INFO: Got endpoints: latency-svc-rwfq6 [1.3845224s] Mar 22 00:11:19.426: INFO: Created: latency-svc-4g5tm Mar 22 00:11:19.492: INFO: Got endpoints: latency-svc-4g5tm [1.675993464s] Mar 22 00:11:19.569: INFO: Created: latency-svc-scctl Mar 22 00:11:19.587: INFO: Got endpoints: latency-svc-scctl [1.697039161s] Mar 22 00:11:19.705: INFO: Created: latency-svc-29rtw Mar 22 00:11:19.719: INFO: Got endpoints: latency-svc-29rtw [1.73784983s] Mar 22 00:11:19.746: INFO: Created: latency-svc-wkfjh Mar 22 00:11:19.834: INFO: Got endpoints: latency-svc-wkfjh [1.750291312s] Mar 22 00:11:19.887: INFO: Created: latency-svc-bvf2w Mar 22 00:11:19.908: INFO: Got endpoints: latency-svc-bvf2w [1.649395449s] Mar 22 00:11:19.927: INFO: Created: latency-svc-4cbqv Mar 22 00:11:19.971: INFO: Got endpoints: latency-svc-4cbqv [1.627875646s] Mar 22 00:11:19.986: INFO: Created: latency-svc-tkgjd Mar 22 00:11:20.003: INFO: Got endpoints: latency-svc-tkgjd [1.632234508s] Mar 22 00:11:20.109: INFO: Created: latency-svc-pf8r4 Mar 22 00:11:20.126: INFO: Created: latency-svc-8t6cl Mar 22 00:11:20.127: INFO: Got endpoints: latency-svc-pf8r4 [1.711815801s] Mar 22 00:11:20.154: INFO: Got endpoints: latency-svc-8t6cl [1.655896731s] Mar 22 00:11:20.179: INFO: Created: latency-svc-rl4vk Mar 22 00:11:20.192: INFO: Got endpoints: latency-svc-rl4vk [1.650450296s] Mar 22 00:11:20.271: INFO: Created: latency-svc-bhv9f Mar 22 00:11:20.294: INFO: Got endpoints: latency-svc-bhv9f [1.710013801s] Mar 22 00:11:20.355: INFO: Created: latency-svc-qf2jk Mar 22 00:11:20.370: INFO: Created: latency-svc-4g4c6 Mar 22 00:11:20.372: INFO: Got endpoints: latency-svc-qf2jk [1.66386315s] Mar 22 00:11:20.394: INFO: Got endpoints: latency-svc-4g4c6 [1.611380897s] Mar 22 00:11:20.497: INFO: Created: latency-svc-5pm85 Mar 22 00:11:20.559: INFO: Got endpoints: latency-svc-5pm85 [1.636948295s] Mar 22 00:11:20.888: INFO: Created: latency-svc-szdhw Mar 22 00:11:20.895: INFO: Got endpoints: latency-svc-szdhw [1.822706715s] Mar 22 00:11:21.074: INFO: Created: latency-svc-c2s6w Mar 22 00:11:21.217: INFO: Got endpoints: latency-svc-c2s6w [1.724897035s] Mar 22 00:11:21.412: INFO: Created: latency-svc-cmr7m Mar 22 00:11:21.546: INFO: Got endpoints: latency-svc-cmr7m 
[1.959360038s] Mar 22 00:11:21.597: INFO: Created: latency-svc-k7vb2 Mar 22 00:11:21.643: INFO: Got endpoints: latency-svc-k7vb2 [1.923997146s] Mar 22 00:11:21.704: INFO: Created: latency-svc-lmw9z Mar 22 00:11:21.730: INFO: Got endpoints: latency-svc-lmw9z [1.8959953s] Mar 22 00:11:21.799: INFO: Created: latency-svc-dh6w6 Mar 22 00:11:21.827: INFO: Got endpoints: latency-svc-dh6w6 [1.919111439s] Mar 22 00:11:21.909: INFO: Created: latency-svc-cd8tc Mar 22 00:11:21.947: INFO: Got endpoints: latency-svc-cd8tc [1.976338069s] Mar 22 00:11:22.000: INFO: Created: latency-svc-fkw57 Mar 22 00:11:22.019: INFO: Got endpoints: latency-svc-fkw57 [2.015844551s] Mar 22 00:11:22.104: INFO: Created: latency-svc-qtbcp Mar 22 00:11:22.109: INFO: Got endpoints: latency-svc-qtbcp [1.981941823s] Mar 22 00:11:22.174: INFO: Created: latency-svc-g4ttk Mar 22 00:11:22.193: INFO: Got endpoints: latency-svc-g4ttk [2.038456153s] Mar 22 00:11:22.229: INFO: Created: latency-svc-pv7rq Mar 22 00:11:22.256: INFO: Got endpoints: latency-svc-pv7rq [2.063646817s] Mar 22 00:11:22.291: INFO: Created: latency-svc-76dqn Mar 22 00:11:22.309: INFO: Got endpoints: latency-svc-76dqn [2.015515579s] Mar 22 00:11:22.367: INFO: Created: latency-svc-x6gwb Mar 22 00:11:22.403: INFO: Got endpoints: latency-svc-x6gwb [2.030080645s] Mar 22 00:11:22.427: INFO: Created: latency-svc-hgpqs Mar 22 00:11:22.443: INFO: Got endpoints: latency-svc-hgpqs [2.049277102s] Mar 22 00:11:22.524: INFO: Created: latency-svc-t552l Mar 22 00:11:22.556: INFO: Created: latency-svc-kc646 Mar 22 00:11:22.556: INFO: Got endpoints: latency-svc-t552l [1.997557245s] Mar 22 00:11:22.585: INFO: Got endpoints: latency-svc-kc646 [1.690488238s] Mar 22 00:11:22.654: INFO: Created: latency-svc-ncgzw Mar 22 00:11:22.677: INFO: Got endpoints: latency-svc-ncgzw [1.46015488s] Mar 22 00:11:22.677: INFO: Created: latency-svc-6k8c7 Mar 22 00:11:22.715: INFO: Got endpoints: latency-svc-6k8c7 [1.169430922s] Mar 22 00:11:22.754: INFO: Created: latency-svc-nqbgb Mar 22 00:11:22.805: INFO: Got endpoints: latency-svc-nqbgb [1.161594871s] Mar 22 00:11:22.834: INFO: Created: latency-svc-dpff6 Mar 22 00:11:22.851: INFO: Got endpoints: latency-svc-dpff6 [1.121698713s] Mar 22 00:11:22.876: INFO: Created: latency-svc-lkgmp Mar 22 00:11:22.894: INFO: Got endpoints: latency-svc-lkgmp [1.066666535s] Mar 22 00:11:22.945: INFO: Created: latency-svc-hlb8w Mar 22 00:11:22.974: INFO: Got endpoints: latency-svc-hlb8w [1.026159587s] Mar 22 00:11:22.993: INFO: Created: latency-svc-klr8x Mar 22 00:11:23.067: INFO: Got endpoints: latency-svc-klr8x [1.04807803s] Mar 22 00:11:23.080: INFO: Created: latency-svc-5wn54 Mar 22 00:11:23.087: INFO: Got endpoints: latency-svc-5wn54 [978.186448ms] Mar 22 00:11:23.206: INFO: Created: latency-svc-vbght Mar 22 00:11:23.223: INFO: Got endpoints: latency-svc-vbght [1.029809154s] Mar 22 00:11:23.224: INFO: Created: latency-svc-svqhq Mar 22 00:11:23.260: INFO: Got endpoints: latency-svc-svqhq [1.003891684s] Mar 22 00:11:23.274: INFO: Created: latency-svc-88cm6 Mar 22 00:11:23.291: INFO: Got endpoints: latency-svc-88cm6 [981.496755ms] Mar 22 00:11:23.330: INFO: Created: latency-svc-vzssq Mar 22 00:11:23.341: INFO: Got endpoints: latency-svc-vzssq [938.476394ms] Mar 22 00:11:23.365: INFO: Created: latency-svc-669fv Mar 22 00:11:23.378: INFO: Got endpoints: latency-svc-669fv [935.127155ms] Mar 22 00:11:23.430: INFO: Created: latency-svc-npbst Mar 22 00:11:23.462: INFO: Got endpoints: latency-svc-npbst [905.789463ms] Mar 22 00:11:23.506: INFO: Created: latency-svc-ggx4j Mar 22 
00:11:23.523: INFO: Got endpoints: latency-svc-ggx4j [937.074366ms] Mar 22 00:11:23.588: INFO: Created: latency-svc-wklxv Mar 22 00:11:23.643: INFO: Got endpoints: latency-svc-wklxv [966.098009ms] Mar 22 00:11:23.715: INFO: Created: latency-svc-db6xw Mar 22 00:11:23.755: INFO: Created: latency-svc-8zd9g Mar 22 00:11:23.756: INFO: Got endpoints: latency-svc-db6xw [1.040090492s] Mar 22 00:11:23.765: INFO: Got endpoints: latency-svc-8zd9g [959.913499ms] Mar 22 00:11:23.810: INFO: Created: latency-svc-fhjf8 Mar 22 00:11:23.857: INFO: Got endpoints: latency-svc-fhjf8 [1.005788894s] Mar 22 00:11:23.902: INFO: Created: latency-svc-v7qx2 Mar 22 00:11:23.924: INFO: Created: latency-svc-qfjjw Mar 22 00:11:23.924: INFO: Got endpoints: latency-svc-v7qx2 [1.029909881s] Mar 22 00:11:24.007: INFO: Got endpoints: latency-svc-qfjjw [1.033111574s] Mar 22 00:11:24.025: INFO: Created: latency-svc-pjgmw Mar 22 00:11:24.040: INFO: Got endpoints: latency-svc-pjgmw [973.334709ms] Mar 22 00:11:24.166: INFO: Created: latency-svc-kmp6t Mar 22 00:11:24.175: INFO: Got endpoints: latency-svc-kmp6t [1.087727793s] Mar 22 00:11:24.238: INFO: Created: latency-svc-nd477 Mar 22 00:11:24.253: INFO: Got endpoints: latency-svc-nd477 [1.02988653s] Mar 22 00:11:24.314: INFO: Created: latency-svc-fxqnw Mar 22 00:11:24.319: INFO: Got endpoints: latency-svc-fxqnw [1.058803881s] Mar 22 00:11:24.357: INFO: Created: latency-svc-2bk5f Mar 22 00:11:24.373: INFO: Got endpoints: latency-svc-2bk5f [1.081454497s] Mar 22 00:11:24.410: INFO: Created: latency-svc-ggjql Mar 22 00:11:24.458: INFO: Got endpoints: latency-svc-ggjql [1.116501602s] Mar 22 00:11:24.490: INFO: Created: latency-svc-c8ltt Mar 22 00:11:24.505: INFO: Got endpoints: latency-svc-c8ltt [1.12626816s] Mar 22 00:11:24.589: INFO: Created: latency-svc-hlcgq Mar 22 00:11:24.603: INFO: Got endpoints: latency-svc-hlcgq [1.140913663s] Mar 22 00:11:24.667: INFO: Created: latency-svc-6s86d Mar 22 00:11:24.687: INFO: Got endpoints: latency-svc-6s86d [1.164491708s] Mar 22 00:11:24.795: INFO: Created: latency-svc-7jwbb Mar 22 00:11:24.822: INFO: Got endpoints: latency-svc-7jwbb [1.179102674s] Mar 22 00:11:24.882: INFO: Created: latency-svc-vzd2h Mar 22 00:11:24.891: INFO: Got endpoints: latency-svc-vzd2h [1.13509603s] Mar 22 00:11:24.920: INFO: Created: latency-svc-nkpph Mar 22 00:11:24.957: INFO: Got endpoints: latency-svc-nkpph [1.192702332s] Mar 22 00:11:25.026: INFO: Created: latency-svc-v94pk Mar 22 00:11:25.075: INFO: Created: latency-svc-22h46 Mar 22 00:11:25.075: INFO: Got endpoints: latency-svc-v94pk [1.217948793s] Mar 22 00:11:25.157: INFO: Got endpoints: latency-svc-22h46 [1.233463214s] Mar 22 00:11:25.191: INFO: Created: latency-svc-pkc97 Mar 22 00:11:25.211: INFO: Got endpoints: latency-svc-pkc97 [1.204552987s] Mar 22 00:11:25.233: INFO: Created: latency-svc-7bkmj Mar 22 00:11:25.313: INFO: Got endpoints: latency-svc-7bkmj [1.272920538s] Mar 22 00:11:25.334: INFO: Created: latency-svc-qr79r Mar 22 00:11:25.362: INFO: Got endpoints: latency-svc-qr79r [1.187106182s] Mar 22 00:11:25.381: INFO: Created: latency-svc-ngq2k Mar 22 00:11:25.391: INFO: Got endpoints: latency-svc-ngq2k [1.138121502s] Mar 22 00:11:25.439: INFO: Created: latency-svc-mzksl Mar 22 00:11:25.473: INFO: Got endpoints: latency-svc-mzksl [1.154723697s] Mar 22 00:11:25.474: INFO: Created: latency-svc-lhlvl Mar 22 00:11:25.514: INFO: Got endpoints: latency-svc-lhlvl [1.140893992s] Mar 22 00:11:25.604: INFO: Created: latency-svc-vs4kq Mar 22 00:11:25.629: INFO: Got endpoints: latency-svc-vs4kq [1.1714547s] Mar 22 
00:11:25.665: INFO: Created: latency-svc-pzcnd Mar 22 00:11:25.714: INFO: Got endpoints: latency-svc-pzcnd [1.209451448s] Mar 22 00:11:25.741: INFO: Created: latency-svc-4f455 Mar 22 00:11:25.772: INFO: Got endpoints: latency-svc-4f455 [1.168386072s] Mar 22 00:11:25.882: INFO: Created: latency-svc-8b7kr Mar 22 00:11:25.898: INFO: Got endpoints: latency-svc-8b7kr [1.210516304s] Mar 22 00:11:25.929: INFO: Created: latency-svc-pblkz Mar 22 00:11:25.975: INFO: Got endpoints: latency-svc-pblkz [1.15230397s] Mar 22 00:11:26.036: INFO: Created: latency-svc-n4sx4 Mar 22 00:11:26.044: INFO: Got endpoints: latency-svc-n4sx4 [1.153083861s] Mar 22 00:11:26.097: INFO: Created: latency-svc-kf6zg Mar 22 00:11:26.134: INFO: Got endpoints: latency-svc-kf6zg [1.176398888s] Mar 22 00:11:26.197: INFO: Created: latency-svc-hpmq5 Mar 22 00:11:26.212: INFO: Got endpoints: latency-svc-hpmq5 [1.136453004s] Mar 22 00:11:26.271: INFO: Created: latency-svc-qxnjv Mar 22 00:11:26.295: INFO: Got endpoints: latency-svc-qxnjv [1.137627383s] Mar 22 00:11:26.296: INFO: Created: latency-svc-zcnhz Mar 22 00:11:26.337: INFO: Got endpoints: latency-svc-zcnhz [1.12594886s] Mar 22 00:11:26.434: INFO: Created: latency-svc-krlh6 Mar 22 00:11:26.455: INFO: Got endpoints: latency-svc-krlh6 [1.14121164s] Mar 22 00:11:26.497: INFO: Created: latency-svc-7phvm Mar 22 00:11:26.529: INFO: Got endpoints: latency-svc-7phvm [1.167342154s] Mar 22 00:11:26.577: INFO: Created: latency-svc-85fv7 Mar 22 00:11:26.601: INFO: Got endpoints: latency-svc-85fv7 [1.210023365s] Mar 22 00:11:26.629: INFO: Created: latency-svc-9c5ch Mar 22 00:11:26.658: INFO: Got endpoints: latency-svc-9c5ch [1.184742903s] Mar 22 00:11:26.726: INFO: Created: latency-svc-nz8pw Mar 22 00:11:26.735: INFO: Got endpoints: latency-svc-nz8pw [1.221810511s] Mar 22 00:11:26.871: INFO: Created: latency-svc-vpnjg Mar 22 00:11:26.874: INFO: Got endpoints: latency-svc-vpnjg [1.245052204s] Mar 22 00:11:26.905: INFO: Created: latency-svc-zd7kt Mar 22 00:11:26.919: INFO: Got endpoints: latency-svc-zd7kt [1.204307741s] Mar 22 00:11:27.002: INFO: Created: latency-svc-d2n2j Mar 22 00:11:27.027: INFO: Created: latency-svc-7knwd Mar 22 00:11:27.028: INFO: Got endpoints: latency-svc-d2n2j [1.255909242s] Mar 22 00:11:27.057: INFO: Got endpoints: latency-svc-7knwd [1.15933174s] Mar 22 00:11:27.090: INFO: Created: latency-svc-tj2hp Mar 22 00:11:27.129: INFO: Got endpoints: latency-svc-tj2hp [1.154359354s] Mar 22 00:11:27.181: INFO: Created: latency-svc-kghcf Mar 22 00:11:27.213: INFO: Got endpoints: latency-svc-kghcf [1.16885889s] Mar 22 00:11:27.295: INFO: Created: latency-svc-mjjpb Mar 22 00:11:27.319: INFO: Created: latency-svc-dl7gh Mar 22 00:11:27.320: INFO: Got endpoints: latency-svc-mjjpb [1.185713592s] Mar 22 00:11:27.348: INFO: Got endpoints: latency-svc-dl7gh [1.136274161s] Mar 22 00:11:27.392: INFO: Created: latency-svc-d7gfr Mar 22 00:11:27.445: INFO: Got endpoints: latency-svc-d7gfr [1.149748476s] Mar 22 00:11:27.484: INFO: Created: latency-svc-wpf5s Mar 22 00:11:27.507: INFO: Got endpoints: latency-svc-wpf5s [1.169351041s] Mar 22 00:11:27.595: INFO: Created: latency-svc-kqdg4 Mar 22 00:11:27.624: INFO: Got endpoints: latency-svc-kqdg4 [1.169537008s] Mar 22 00:11:27.675: INFO: Created: latency-svc-cbw4n Mar 22 00:11:27.722: INFO: Got endpoints: latency-svc-cbw4n [1.192834514s] Mar 22 00:11:27.758: INFO: Created: latency-svc-hzwlz Mar 22 00:11:27.793: INFO: Got endpoints: latency-svc-hzwlz [1.191391613s] Mar 22 00:11:27.859: INFO: Created: latency-svc-dqf5m Mar 22 00:11:27.885: INFO: 
Got endpoints: latency-svc-dqf5m [1.226329109s] Mar 22 00:11:27.984: INFO: Created: latency-svc-sm6jk Mar 22 00:11:27.998: INFO: Got endpoints: latency-svc-sm6jk [1.262583983s] Mar 22 00:11:27.998: INFO: Latencies: [121.750095ms 159.109245ms 172.371529ms 270.191988ms 350.47198ms 444.566485ms 509.612305ms 557.926406ms 603.011536ms 696.9954ms 729.749069ms 815.975434ms 872.661271ms 905.789463ms 914.457942ms 921.734148ms 935.127155ms 937.074366ms 938.476394ms 959.913499ms 966.098009ms 973.334709ms 978.186448ms 981.496755ms 984.833046ms 985.960318ms 990.02029ms 991.18887ms 996.992748ms 998.423582ms 999.430565ms 1.001371652s 1.003891684s 1.005788894s 1.008455749s 1.016597415s 1.026159587s 1.029809154s 1.02988653s 1.029909881s 1.033111574s 1.036151087s 1.040090492s 1.043104866s 1.04807803s 1.051007463s 1.054942898s 1.058803881s 1.063051189s 1.066666535s 1.068547502s 1.075389766s 1.081454497s 1.08574866s 1.087727793s 1.113005657s 1.116501602s 1.121698713s 1.12594886s 1.12626816s 1.13509603s 1.136274161s 1.136453004s 1.137627383s 1.138121502s 1.140893992s 1.140913663s 1.14121164s 1.144934339s 1.149748476s 1.15230397s 1.153083861s 1.154359354s 1.154723697s 1.15933174s 1.161594871s 1.162959505s 1.164491708s 1.165751401s 1.167342154s 1.168386072s 1.16885889s 1.169351041s 1.169430922s 1.169537008s 1.1714547s 1.17369271s 1.176398888s 1.179102674s 1.1823844s 1.184742903s 1.185713592s 1.187106182s 1.188867675s 1.191391613s 1.192702332s 1.192834514s 1.202933803s 1.204273825s 1.204307741s 1.204552987s 1.209451448s 1.210023365s 1.210516304s 1.217948793s 1.221062828s 1.221810511s 1.226329109s 1.227875431s 1.233463214s 1.238329771s 1.245052204s 1.252924852s 1.255909242s 1.262583983s 1.272920538s 1.278800445s 1.281060751s 1.286124551s 1.288586849s 1.296454937s 1.298212554s 1.304637893s 1.306642171s 1.33673109s 1.337695191s 1.35894283s 1.362456066s 1.36551432s 1.3845224s 1.390465785s 1.397234498s 1.398012166s 1.409859764s 1.410924234s 1.412568567s 1.419732407s 1.425557889s 1.42639505s 1.433381101s 1.434502364s 1.439377331s 1.442867862s 1.458226234s 1.46015488s 1.490262243s 1.502112257s 1.548384946s 1.558588058s 1.559095547s 1.611380897s 1.627875646s 1.632234508s 1.636948295s 1.649395449s 1.65037829s 1.650450296s 1.655896731s 1.658625133s 1.660941219s 1.662706318s 1.66386315s 1.668257325s 1.675993464s 1.690488238s 1.697039161s 1.710013801s 1.711815801s 1.724897035s 1.73784983s 1.750291312s 1.822706715s 1.8959953s 1.919111439s 1.923997146s 1.959360038s 1.976338069s 1.981941823s 1.997557245s 2.015515579s 2.015844551s 2.030080645s 2.038456153s 2.049277102s 2.050298201s 2.063646817s 2.208623622s 2.221567359s 2.305679799s 2.33271148s 2.506035759s 2.543828971s 2.559094891s 2.559913182s 2.560889522s 2.585597957s 2.595737999s 2.596963428s 2.60445151s 2.633174235s] Mar 22 00:11:27.998: INFO: 50 %ile: 1.204552987s Mar 22 00:11:27.998: INFO: 90 %ile: 2.015844551s Mar 22 00:11:27.998: INFO: 99 %ile: 2.60445151s Mar 22 00:11:27.998: INFO: Total sample count: 200 [AfterEach] [sig-network] Service endpoints latency /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 00:11:27.998: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svc-latency-2471" for this suite. 
• [SLOW TEST:22.960 seconds] [sig-network] Service endpoints latency /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should not be very high [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] Service endpoints latency should not be very high [Conformance]","total":330,"completed":97,"skipped":1611,"failed":4,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]"]} SSSSSSSS ------------------------------ [sig-node] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 00:11:28.142: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:187 [It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating the pod STEP: submitting the pod to kubernetes Mar 22 00:11:28.356: INFO: The status of Pod pod-update-activedeadlineseconds-e12336c9-8621-44bf-9701-882482937f06 is Pending, waiting for it to be Running (with Ready = true) Mar 22 00:11:30.800: INFO: The status of Pod pod-update-activedeadlineseconds-e12336c9-8621-44bf-9701-882482937f06 is Pending, waiting for it to be Running (with Ready = true) Mar 22 00:11:32.420: INFO: The status of Pod pod-update-activedeadlineseconds-e12336c9-8621-44bf-9701-882482937f06 is Pending, waiting for it to be Running (with Ready = true) Mar 22 00:11:34.389: INFO: The status of Pod pod-update-activedeadlineseconds-e12336c9-8621-44bf-9701-882482937f06 is Running (Ready = true) STEP: verifying the pod is in kubernetes STEP: updating the pod Mar 22 00:11:35.102: INFO: Successfully updated pod "pod-update-activedeadlineseconds-e12336c9-8621-44bf-9701-882482937f06" Mar 22 00:11:35.102: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-e12336c9-8621-44bf-9701-882482937f06" in namespace "pods-8307" to be "terminated due to deadline exceeded" Mar 22 00:11:35.126: INFO: Pod "pod-update-activedeadlineseconds-e12336c9-8621-44bf-9701-882482937f06": Phase="Running", Reason="", readiness=true. Elapsed: 24.120741ms Mar 22 00:11:37.380: INFO: Pod "pod-update-activedeadlineseconds-e12336c9-8621-44bf-9701-882482937f06": Phase="Failed", Reason="DeadlineExceeded", readiness=false. 
Elapsed: 2.27846858s Mar 22 00:11:37.380: INFO: Pod "pod-update-activedeadlineseconds-e12336c9-8621-44bf-9701-882482937f06" satisfied condition "terminated due to deadline exceeded" [AfterEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 00:11:37.380: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-8307" for this suite. • [SLOW TEST:10.138 seconds] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":330,"completed":98,"skipped":1619,"failed":4,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 00:11:38.280: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:86 [It] RecreateDeployment should delete old pods and create new ones [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Mar 22 00:11:39.117: INFO: Creating deployment "test-recreate-deployment" Mar 22 00:11:39.142: INFO: Waiting for deployment "test-recreate-deployment" to be updated to revision 1 Mar 22 00:11:39.637: INFO: new ReplicaSet for deployment "test-recreate-deployment" is yet to be created Mar 22 00:11:41.770: INFO: Waiting for deployment "test-recreate-deployment" to complete Mar 22 00:11:41.954: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751968699, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751968699, loc:(*time.Location)(0x99208a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751968699, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751968699,
loc:(*time.Location)(0x99208a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-546b5fd69c\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 22 00:11:43.962: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751968699, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751968699, loc:(*time.Location)(0x99208a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751968699, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751968699, loc:(*time.Location)(0x99208a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-546b5fd69c\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 22 00:11:46.109: INFO: Triggering a new rollout for deployment "test-recreate-deployment" Mar 22 00:11:46.129: INFO: Updating deployment test-recreate-deployment Mar 22 00:11:46.129: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:80 Mar 22 00:11:47.212: INFO: Deployment "test-recreate-deployment": &Deployment{ObjectMeta:{test-recreate-deployment deployment-2197 ca425a55-c321-475b-bc79-7f5eddfaa36f 6990025 2 2021-03-22 00:11:39 +0000 UTC map[name:sample-pod-3] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2021-03-22 00:11:46 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2021-03-22 00:11:46 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:replicas":{},"f:unavailableReplicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc00374d038 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2021-03-22 00:11:46 +0000 UTC,LastTransitionTime:2021-03-22 00:11:46 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "test-recreate-deployment-85d47dcb4" is progressing.,LastUpdateTime:2021-03-22 00:11:46 +0000 UTC,LastTransitionTime:2021-03-22 00:11:39 +0000 UTC,},},ReadyReplicas:0,CollisionCount:nil,},} Mar 22 00:11:47.410: INFO: New ReplicaSet "test-recreate-deployment-85d47dcb4" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:{test-recreate-deployment-85d47dcb4 deployment-2197 25ff4b86-fe92-4f9e-ab2d-1f8c2a2fa4db 6990022 1 2021-03-22 00:11:46 +0000 UTC map[name:sample-pod-3 pod-template-hash:85d47dcb4] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-recreate-deployment ca425a55-c321-475b-bc79-7f5eddfaa36f 0xc00374d490 0xc00374d491}] [] [{kube-controller-manager Update apps/v1 2021-03-22 00:11:46 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ca425a55-c321-475b-bc79-7f5eddfaa36f\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 85d47dcb4,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:85d47dcb4] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc00374d508 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Mar 22 00:11:47.410: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": Mar 22 00:11:47.410: INFO: &ReplicaSet{ObjectMeta:{test-recreate-deployment-546b5fd69c deployment-2197 181af8e6-c57e-4e8c-a6ce-bae7b011316d 6990003 2 2021-03-22 00:11:39 +0000 UTC map[name:sample-pod-3 pod-template-hash:546b5fd69c] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-recreate-deployment ca425a55-c321-475b-bc79-7f5eddfaa36f 0xc00374d397 0xc00374d398}] [] [{kube-controller-manager Update apps/v1 2021-03-22 00:11:46 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ca425a55-c321-475b-bc79-7f5eddfaa36f\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 546b5fd69c,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:546b5fd69c] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.28 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc00374d428 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Mar 22 00:11:47.424: INFO: Pod 
"test-recreate-deployment-85d47dcb4-rgmv9" is not available: &Pod{ObjectMeta:{test-recreate-deployment-85d47dcb4-rgmv9 test-recreate-deployment-85d47dcb4- deployment-2197 49709d79-2d60-4a87-90e5-ed0c676bc23f 6990027 0 2021-03-22 00:11:46 +0000 UTC map[name:sample-pod-3 pod-template-hash:85d47dcb4] map[] [{apps/v1 ReplicaSet test-recreate-deployment-85d47dcb4 25ff4b86-fe92-4f9e-ab2d-1f8c2a2fa4db 0xc00374d940 0xc00374d941}] [] [{kube-controller-manager Update v1 2021-03-22 00:11:46 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"25ff4b86-fe92-4f9e-ab2d-1f8c2a2fa4db\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-03-22 00:11:46 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-s8bqv,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-s8bqv,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-s8bqv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUse
r:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-22 00:11:46 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-22 00:11:46 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-22 00:11:46 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-22 00:11:46 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.9,PodIP:,StartTime:2021-03-22 00:11:46 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 00:11:47.424: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-2197" for this suite. 
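For reference while reading the dump above: the Deployment under test declares Strategy Type:Recreate, which scales the old ReplicaSet to zero before the new one is created, so old and new pods never run together. A minimal client-go-style sketch of the spec fields visible in the dump (only the name, labels, image, and strategy type are taken from the log; the constructor itself is illustrative, not the test's own code):

package main

import (
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// recreateDeployment mirrors the spec dumped above: one replica of httpd
// behind a Recreate strategy. Recreate deletes all old pods before the new
// ReplicaSet is scaled up, which is why the watch above can assert that new
// pods never run alongside old ones.
func recreateDeployment() *appsv1.Deployment {
	replicas := int32(1)
	return &appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: "test-recreate-deployment"},
		Spec: appsv1.DeploymentSpec{
			Replicas: &replicas,
			Selector: &metav1.LabelSelector{MatchLabels: map[string]string{"name": "sample-pod-3"}},
			Strategy: appsv1.DeploymentStrategy{Type: appsv1.RecreateDeploymentStrategyType},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: map[string]string{"name": "sample-pod-3"}},
				Spec: corev1.PodSpec{Containers: []corev1.Container{{
					Name:  "httpd",
					Image: "k8s.gcr.io/e2e-test-images/httpd:2.4.38-1",
				}}},
			},
		},
	}
}

func main() { fmt.Println(recreateDeployment().Spec.Strategy.Type) } // prints "Recreate"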
• [SLOW TEST:9.180 seconds] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RecreateDeployment should delete old pods and create new ones [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":330,"completed":99,"skipped":1661,"failed":4,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 00:11:47.461: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:86 [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Mar 22 00:11:47.549: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) Mar 22 00:11:47.690: INFO: Pod name sample-pod: Found 0 pods out of 1 Mar 22 00:11:52.712: INFO: Pod name sample-pod: Found 1 pod out of 1 STEP: ensuring each pod is running Mar 22 00:11:52.712: INFO: Creating deployment "test-rolling-update-deployment" Mar 22 00:11:52.726: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has Mar 22 00:11:52.767: INFO: new ReplicaSet for deployment "test-rolling-update-deployment" is yet to be created Mar 22 00:11:54.797: INFO: Ensuring status for deployment "test-rolling-update-deployment" is as expected Mar 22 00:11:54.982: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751968713, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751968713, loc:(*time.Location)(0x99208a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751968713, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751968712, loc:(*time.Location)(0x99208a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet
\"test-rolling-update-deployment-65dc7745\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 22 00:11:57.007: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751968713, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751968713, loc:(*time.Location)(0x99208a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751968713, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751968712, loc:(*time.Location)(0x99208a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-65dc7745\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 22 00:11:59.404: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:80 Mar 22 00:11:59.559: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:{test-rolling-update-deployment deployment-5440 6200015e-f716-4386-9c21-24527ab6db40 6990370 1 2021-03-22 00:11:52 +0000 UTC map[name:sample-pod] map[deployment.kubernetes.io/revision:3546343826724305833] [] [] [{e2e.test Update apps/v1 2021-03-22 00:11:52 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2021-03-22 00:11:58 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.28 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false 
false}] [] Always 0xc003673ee8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2021-03-22 00:11:53 +0000 UTC,LastTransitionTime:2021-03-22 00:11:53 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rolling-update-deployment-65dc7745" has successfully progressed.,LastUpdateTime:2021-03-22 00:11:58 +0000 UTC,LastTransitionTime:2021-03-22 00:11:52 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Mar 22 00:11:59.612: INFO: New ReplicaSet "test-rolling-update-deployment-65dc7745" of Deployment "test-rolling-update-deployment": &ReplicaSet{ObjectMeta:{test-rolling-update-deployment-65dc7745 deployment-5440 7b4bdbbe-7a45-421e-a86b-095dd9ff16fe 6990351 1 2021-03-22 00:11:52 +0000 UTC map[name:sample-pod pod-template-hash:65dc7745] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305833] [{apps/v1 Deployment test-rolling-update-deployment 6200015e-f716-4386-9c21-24527ab6db40 0xc00420a34f 0xc00420a360}] [] [{kube-controller-manager Update apps/v1 2021-03-22 00:11:58 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6200015e-f716-4386-9c21-24527ab6db40\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 65dc7745,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod-template-hash:65dc7745] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.28 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc00420a3d8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Mar 22 00:11:59.612: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": Mar 22 00:11:59.612: INFO: &ReplicaSet{ObjectMeta:{test-rolling-update-controller deployment-5440 e196600d-5c44-4cc2-a503-9766d50573c5 6990367 2 2021-03-22 00:11:47 +0000 UTC map[name:sample-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305832] [{apps/v1 Deployment test-rolling-update-deployment 6200015e-f716-4386-9c21-24527ab6db40 0xc00420a257 0xc00420a258}] [] [{e2e.test Update apps/v1 2021-03-22 00:11:47 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2021-03-22 00:11:58 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6200015e-f716-4386-9c21-24527ab6db40\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod:httpd] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc00420a2f8 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Mar 22 00:11:59.627: INFO: Pod "test-rolling-update-deployment-65dc7745-72hgd" is available: &Pod{ObjectMeta:{test-rolling-update-deployment-65dc7745-72hgd test-rolling-update-deployment-65dc7745- deployment-5440 f6568eec-209e-483b-94c7-126cb4b71cc3 6990350 0 
2021-03-22 00:11:52 +0000 UTC map[name:sample-pod pod-template-hash:65dc7745] map[] [{apps/v1 ReplicaSet test-rolling-update-deployment-65dc7745 7b4bdbbe-7a45-421e-a86b-095dd9ff16fe 0xc00390074f 0xc003900760}] [] [{kube-controller-manager Update v1 2021-03-22 00:11:52 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7b4bdbbe-7a45-421e-a86b-095dd9ff16fe\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-03-22 00:11:58 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.48\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-758fc,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-758fc,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:k8s.gcr.io/e2e-test-images/agnhost:2.28,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-758fc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]Loca
lObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-22 00:11:52 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-22 00:11:58 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-22 00:11:58 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-22 00:11:52 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.9,PodIP:10.244.2.48,StartTime:2021-03-22 00:11:52 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-03-22 00:11:57 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/agnhost:2.28,ImageID:k8s.gcr.io/e2e-test-images/agnhost@sha256:4f3eec1e602e1e0b0e5858c8f3d399d0ef536799ec1f7666e025f211001cb706,ContainerID:containerd://091076c7ac48347db222274893d9a10ae03ab9f2c1bf8405493a6f18684b0d28,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.48,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 00:11:59.629: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-5440" for this suite. 
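Note: the object dump above is the replacement pod created during the rolling update. The condition the framework waits on (latest generation observed, every replica updated and available) can be reproduced outside the harness with a minimal client-go sketch; the deployment name and namespace below are taken from this log, but the code is an illustration, not the framework's actual implementation.

// rollingupdate_wait.go - minimal sketch, assuming client-go and a kubeconfig
// at /root/.kube/config (the path used by this run).
package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Poll until the rollout is complete: the controller has observed the
	// latest generation and all replicas are updated and available.
	err = wait.PollImmediate(2*time.Second, 3*time.Minute, func() (bool, error) {
		d, err := cs.AppsV1().Deployments("deployment-5440").Get(context.TODO(),
			"test-rolling-update-deployment", metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		return d.Status.ObservedGeneration >= d.Generation &&
			d.Status.UpdatedReplicas == *d.Spec.Replicas &&
			d.Status.AvailableReplicas == *d.Spec.Replicas, nil
	})
	if err != nil {
		panic(err)
	}
	fmt.Println("rollout complete; old pods deleted, new pods created")
}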
• [SLOW TEST:12.309 seconds] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":330,"completed":100,"skipped":1683,"failed":4,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]"]} SSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 00:11:59.770: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name projected-configmap-test-volume-da2dde02-3287-49ec-8406-9b95ff27bdb2 STEP: Creating a pod to test consume configMaps Mar 22 00:12:00.175: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-a45bc724-b8ae-478c-8986-14be6b847e1a" in namespace "projected-4184" to be "Succeeded or Failed" Mar 22 00:12:00.229: INFO: Pod "pod-projected-configmaps-a45bc724-b8ae-478c-8986-14be6b847e1a": Phase="Pending", Reason="", readiness=false. Elapsed: 53.770733ms Mar 22 00:12:02.410: INFO: Pod "pod-projected-configmaps-a45bc724-b8ae-478c-8986-14be6b847e1a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.234109967s Mar 22 00:12:04.433: INFO: Pod "pod-projected-configmaps-a45bc724-b8ae-478c-8986-14be6b847e1a": Phase="Running", Reason="", readiness=true. Elapsed: 4.257337641s Mar 22 00:12:06.481: INFO: Pod "pod-projected-configmaps-a45bc724-b8ae-478c-8986-14be6b847e1a": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.305427364s STEP: Saw pod success Mar 22 00:12:06.481: INFO: Pod "pod-projected-configmaps-a45bc724-b8ae-478c-8986-14be6b847e1a" satisfied condition "Succeeded or Failed" Mar 22 00:12:06.507: INFO: Trying to get logs from node latest-worker pod pod-projected-configmaps-a45bc724-b8ae-478c-8986-14be6b847e1a container agnhost-container: STEP: delete the pod Mar 22 00:12:06.639: INFO: Waiting for pod pod-projected-configmaps-a45bc724-b8ae-478c-8986-14be6b847e1a to disappear Mar 22 00:12:06.645: INFO: Pod pod-projected-configmaps-a45bc724-b8ae-478c-8986-14be6b847e1a no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 00:12:06.645: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4184" for this suite. • [SLOW TEST:6.913 seconds] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":330,"completed":101,"skipped":1688,"failed":4,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]"]} SSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 00:12:06.684: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 22 00:12:08.322: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 22 00:12:11.047: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751968728, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751968728, loc:(*time.Location)(0x99208a0)}}, 
Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751968728, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751968728, loc:(*time.Location)(0x99208a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-8977db\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 22 00:12:13.197: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751968728, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751968728, loc:(*time.Location)(0x99208a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751968728, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751968728, loc:(*time.Location)(0x99208a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-8977db\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 22 00:12:16.212: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny attaching pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod STEP: 'kubectl attach' the pod, should be denied by the webhook Mar 22 00:12:22.763: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=webhook-8835 attach --namespace=webhook-8835 to-be-attached-pod -i -c=container1' Mar 22 00:12:22.887: INFO: rc: 1 [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 00:12:23.128: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-8835" for this suite. STEP: Destroying namespace "webhook-8835-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 • [SLOW TEST:17.625 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny attaching pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":330,"completed":102,"skipped":1696,"failed":4,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]"]} S ------------------------------ [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 00:12:24.309: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should be able to change the type from ExternalName to ClusterIP [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a service externalname-service with the type=ExternalName in namespace services-7036 STEP: changing the ExternalName service to type=ClusterIP STEP: creating replication controller externalname-service in namespace services-7036 I0322 00:12:24.925507 7 runners.go:190] Created replication controller with name: externalname-service, namespace: services-7036, replica count: 2 I0322 00:12:27.976628 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0322 00:12:30.977654 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Mar 22 00:12:30.977: INFO: Creating new exec pod E0322 00:12:35.617522 7 reflector.go:138] k8s.io/kubernetes/test/e2e/framework/service/jig.go:437: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: the server could not find the requested resource E0322 00:12:37.036219 7 reflector.go:138] k8s.io/kubernetes/test/e2e/framework/service/jig.go:437: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: the server could not find the requested resource E0322 00:12:38.916454 7 reflector.go:138] k8s.io/kubernetes/test/e2e/framework/service/jig.go:437: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: the server could not find the 
requested resource E0322 00:12:44.428674 7 reflector.go:138] k8s.io/kubernetes/test/e2e/framework/service/jig.go:437: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: the server could not find the requested resource E0322 00:12:52.348619 7 reflector.go:138] k8s.io/kubernetes/test/e2e/framework/service/jig.go:437: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: the server could not find the requested resource E0322 00:13:17.241077 7 reflector.go:138] k8s.io/kubernetes/test/e2e/framework/service/jig.go:437: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: the server could not find the requested resource E0322 00:14:07.979523 7 reflector.go:138] k8s.io/kubernetes/test/e2e/framework/service/jig.go:437: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: the server could not find the requested resource Mar 22 00:14:35.614: FAIL: Unexpected error: <*errors.errorString | 0xc003e4e010>: { s: "no subset of available IP address found for the endpoint externalname-service within timeout 2m0s", } no subset of available IP address found for the endpoint externalname-service within timeout 2m0s occurred Full Stack Trace k8s.io/kubernetes/test/e2e/network.glob..func24.14() /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1312 +0x358 k8s.io/kubernetes/test/e2e.RunE2ETests(0xc002c6a180) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c k8s.io/kubernetes/test/e2e.TestE2E(0xc002c6a180) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b testing.tRunner(0xc002c6a180, 0x6d60740) /usr/local/go/src/testing/testing.go:1194 +0xef created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1239 +0x2b3 Mar 22 00:14:35.615: INFO: Cleaning up the ExternalName to ClusterIP test service [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 STEP: Collecting events from namespace "services-7036". STEP: Found 14 events. 
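Note: the 14 events enumerated just below are gathered with a namespace-scoped event list as part of failure diagnostics. A minimal client-go sketch of the same collection step (an illustration, not the framework's code; the namespace is from this log):

// dump_events.go - minimal sketch; assumes client-go.
package sketch

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func dumpEvents(cs kubernetes.Interface, ns string) error {
	evs, err := cs.CoreV1().Events(ns).List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		return err
	}
	fmt.Printf("Found %d events.\n", len(evs.Items))
	for _, e := range evs.Items {
		// Mirrors the log's "At <time> - event for <object>: {<source>} <reason>: <message>" shape.
		fmt.Printf("At %v - event for %s: {%s} %s: %s\n",
			e.FirstTimestamp, e.InvolvedObject.Name, e.Source.Component, e.Reason, e.Message)
	}
	return nil
}

Usage here would be dumpEvents(cs, "services-7036").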
Mar 22 00:14:36.097: INFO: At 2021-03-22 00:12:25 +0000 UTC - event for externalname-service: {replication-controller } SuccessfulCreate: Created pod: externalname-service-qx52c Mar 22 00:14:36.097: INFO: At 2021-03-22 00:12:25 +0000 UTC - event for externalname-service: {replication-controller } SuccessfulCreate: Created pod: externalname-service-wpxxh Mar 22 00:14:36.097: INFO: At 2021-03-22 00:12:25 +0000 UTC - event for externalname-service-qx52c: {default-scheduler } Scheduled: Successfully assigned services-7036/externalname-service-qx52c to latest-worker Mar 22 00:14:36.097: INFO: At 2021-03-22 00:12:25 +0000 UTC - event for externalname-service-wpxxh: {default-scheduler } Scheduled: Successfully assigned services-7036/externalname-service-wpxxh to latest-worker2 Mar 22 00:14:36.097: INFO: At 2021-03-22 00:12:26 +0000 UTC - event for externalname-service-qx52c: {kubelet latest-worker} Pulled: Container image "k8s.gcr.io/e2e-test-images/agnhost:2.28" already present on machine Mar 22 00:14:36.097: INFO: At 2021-03-22 00:12:27 +0000 UTC - event for externalname-service-wpxxh: {kubelet latest-worker2} Pulled: Container image "k8s.gcr.io/e2e-test-images/agnhost:2.28" already present on machine Mar 22 00:14:36.097: INFO: At 2021-03-22 00:12:28 +0000 UTC - event for externalname-service-qx52c: {kubelet latest-worker} Created: Created container externalname-service Mar 22 00:14:36.097: INFO: At 2021-03-22 00:12:28 +0000 UTC - event for externalname-service-qx52c: {kubelet latest-worker} Started: Started container externalname-service Mar 22 00:14:36.097: INFO: At 2021-03-22 00:12:28 +0000 UTC - event for externalname-service-wpxxh: {kubelet latest-worker2} Created: Created container externalname-service Mar 22 00:14:36.097: INFO: At 2021-03-22 00:12:29 +0000 UTC - event for externalname-service-wpxxh: {kubelet latest-worker2} Started: Started container externalname-service Mar 22 00:14:36.097: INFO: At 2021-03-22 00:12:31 +0000 UTC - event for execpodnr8ls: {default-scheduler } Scheduled: Successfully assigned services-7036/execpodnr8ls to latest-worker2 Mar 22 00:14:36.097: INFO: At 2021-03-22 00:12:33 +0000 UTC - event for execpodnr8ls: {kubelet latest-worker2} Pulled: Container image "k8s.gcr.io/e2e-test-images/agnhost:2.28" already present on machine Mar 22 00:14:36.097: INFO: At 2021-03-22 00:12:34 +0000 UTC - event for execpodnr8ls: {kubelet latest-worker2} Started: Started container agnhost-container Mar 22 00:14:36.097: INFO: At 2021-03-22 00:12:34 +0000 UTC - event for execpodnr8ls: {kubelet latest-worker2} Created: Created container agnhost-container Mar 22 00:14:36.102: INFO: POD NODE PHASE GRACE CONDITIONS Mar 22 00:14:36.102: INFO: execpodnr8ls latest-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-03-22 00:12:31 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-03-22 00:12:34 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-03-22 00:12:34 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-03-22 00:12:31 +0000 UTC }] Mar 22 00:14:36.102: INFO: externalname-service-qx52c latest-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-03-22 00:12:25 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-03-22 00:12:29 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-03-22 00:12:29 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-03-22 00:12:25 +0000 UTC }] Mar 22 00:14:36.102: INFO: externalname-service-wpxxh latest-worker2 Running [{Initialized True 0001-01-01 
00:00:00 +0000 UTC 2021-03-22 00:12:25 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-03-22 00:12:29 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-03-22 00:12:29 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-03-22 00:12:25 +0000 UTC }] Mar 22 00:14:36.102: INFO: Mar 22 00:14:36.111: INFO: Logging node info for node latest-control-plane Mar 22 00:14:36.132: INFO: Node Info: &Node{ObjectMeta:{latest-control-plane 490b9532-4cb6-4803-8805-500c50bef538 6991775 0 2021-02-19 10:11:38 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-control-plane kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master:] map[kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2021-02-19 10:11:38 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {kubeadm Update v1 2021-02-19 10:11:39 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{}}}}} {kube-controller-manager Update v1 2021-02-19 10:12:16 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:kind://docker/latest/latest-control-plane,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 
0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-03-22 00:14:35 +0000 UTC,LastTransitionTime:2021-02-19 10:11:38 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-03-22 00:14:35 +0000 UTC,LastTransitionTime:2021-02-19 10:11:38 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-03-22 00:14:35 +0000 UTC,LastTransitionTime:2021-02-19 10:11:38 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-03-22 00:14:35 +0000 UTC,LastTransitionTime:2021-02-19 10:12:15 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.14,},NodeAddress{Type:Hostname,Address:latest-control-plane,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:2277e49732264d9b915753a27b5b08cc,SystemUUID:3fcd47a6-9190-448f-a26a-9823c0424f23,BootID:b267d78b-f69b-4338-80e8-3f4944338e5d,KernelVersion:4.15.0-118-generic,OSImage:Ubuntu 20.04 LTS,ContainerRuntimeVersion:containerd://1.3.3-14-g449e9269,KubeletVersion:v1.21.0-alpha.0,KubeProxyVersion:v1.21.0-alpha.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/etcd:3.4.13-0],SizeBytes:254659261,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.21.0-alpha.0],SizeBytes:136866158,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.21.0-alpha.0],SizeBytes:95511852,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.21.0-alpha.0],SizeBytes:88147262,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.21.0-alpha.0],SizeBytes:66088748,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:a3f7549ea04c419276e8b84e90a515bbce5bc8a057be2ed974ec45492eca346e k8s.gcr.io/e2e-test-images/agnhost:2.26],SizeBytes:49216572,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:4f3eec1e602e1e0b0e5858c8f3d399d0ef536799ec1f7666e025f211001cb706 k8s.gcr.io/e2e-test-images/agnhost:2.28],SizeBytes:49210832,},ContainerImage{Names:[k8s.gcr.io/coredns:1.7.0],SizeBytes:45355487,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 docker.io/aquasec/kube-bench:0.3.1],SizeBytes:8042926,},ContainerImage{Names:[k8s.gcr.io/pause:3.2],SizeBytes:685724,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:301268,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 22 00:14:36.133: INFO: Logging kubelet events for node 
latest-control-plane Mar 22 00:14:36.139: INFO: Logging pods the kubelet thinks are on node latest-control-plane Mar 22 00:14:36.160: INFO: local-path-provisioner-8b46957d4-54gls started at 2021-02-19 10:12:21 +0000 UTC (0+1 container statuses recorded) Mar 22 00:14:36.160: INFO: Container local-path-provisioner ready: true, restart count 0 Mar 22 00:14:36.160: INFO: kube-controller-manager-latest-control-plane started at 2021-02-19 10:11:46 +0000 UTC (0+1 container statuses recorded) Mar 22 00:14:36.160: INFO: Container kube-controller-manager ready: true, restart count 0 Mar 22 00:14:36.160: INFO: kube-scheduler-latest-control-plane started at 2021-02-19 10:11:46 +0000 UTC (0+1 container statuses recorded) Mar 22 00:14:36.160: INFO: Container kube-scheduler ready: true, restart count 0 Mar 22 00:14:36.160: INFO: kube-apiserver-latest-control-plane started at 2021-02-19 10:11:46 +0000 UTC (0+1 container statuses recorded) Mar 22 00:14:36.160: INFO: Container kube-apiserver ready: true, restart count 0 Mar 22 00:14:36.160: INFO: kindnet-94zqp started at 2021-02-19 10:11:54 +0000 UTC (0+1 container statuses recorded) Mar 22 00:14:36.160: INFO: Container kindnet-cni ready: true, restart count 0 Mar 22 00:14:36.160: INFO: coredns-74ff55c5b-9rxsk started at 2021-03-22 00:01:47 +0000 UTC (0+1 container statuses recorded) Mar 22 00:14:36.160: INFO: Container coredns ready: true, restart count 0 Mar 22 00:14:36.160: INFO: etcd-latest-control-plane started at 2021-02-19 10:11:46 +0000 UTC (0+1 container statuses recorded) Mar 22 00:14:36.160: INFO: Container etcd ready: true, restart count 0 Mar 22 00:14:36.160: INFO: kube-proxy-6jdsd started at 2021-02-19 10:11:54 +0000 UTC (0+1 container statuses recorded) Mar 22 00:14:36.160: INFO: Container kube-proxy ready: true, restart count 0 Mar 22 00:14:36.160: INFO: coredns-74ff55c5b-tqd5x started at 2021-03-22 00:01:48 +0000 UTC (0+1 container statuses recorded) Mar 22 00:14:36.160: INFO: Container coredns ready: true, restart count 0 W0322 00:14:36.166926 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. 
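Note on the failure above: the repeated reflector errors show the service jig's list/watch of the v1 (discovery.k8s.io/v1) EndpointSlice API failing with "the server could not find the requested resource" — that API only graduated to v1 in Kubernetes 1.21, so an API server that serves only the v1beta1 version will reject it. The jig's readiness wait therefore can never observe endpoints and times out after 2m0s, even though both externalname-service pods are Running. A minimal sketch of the equivalent readiness check against the core/v1 Endpoints object, which does not depend on EndpointSlice (names from this log; an illustration, not the jig's code):

// endpoints_wait.go - minimal sketch of the readiness condition that timed
// out, expressed against core/v1 Endpoints instead of EndpointSlice.
package sketch

import (
	"context"
	"time"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForReadyEndpoints polls until the service's Endpoints object carries at
// least one ready address, or the 2m0s budget seen in the log expires.
func waitForReadyEndpoints(cs kubernetes.Interface, ns, svc string) error {
	return wait.PollImmediate(2*time.Second, 2*time.Minute, func() (bool, error) {
		ep, err := cs.CoreV1().Endpoints(ns).Get(context.TODO(), svc, metav1.GetOptions{})
		if apierrors.IsNotFound(err) {
			return false, nil // not created yet; keep polling
		}
		if err != nil {
			return false, err
		}
		for _, s := range ep.Subsets {
			if len(s.Addresses) > 0 {
				return true, nil // a ready address backs the ClusterIP
			}
		}
		return false, nil
	})
}

For this run the call would be waitForReadyEndpoints(cs, "services-7036", "externalname-service").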
Mar 22 00:14:36.360: INFO: Latency metrics for node latest-control-plane Mar 22 00:14:36.360: INFO: Logging node info for node latest-worker Mar 22 00:14:36.365: INFO: Node Info: &Node{ObjectMeta:{latest-worker 52cd6d4b-d53f-435d-801a-04c2822dec44 6991553 0 2021-02-19 10:12:05 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux io.kubernetes.storage.mock/node:some-mock-node kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-worker kubernetes.io/os:linux] map[csi.volume.kubernetes.io/nodeid:{"csi-mock-csi-mock-volumes-11":"csi-mock-csi-mock-volumes-11","csi-mock-csi-mock-volumes-1170":"csi-mock-csi-mock-volumes-1170","csi-mock-csi-mock-volumes-1204":"csi-mock-csi-mock-volumes-1204","csi-mock-csi-mock-volumes-1243":"csi-mock-csi-mock-volumes-1243","csi-mock-csi-mock-volumes-135":"csi-mock-csi-mock-volumes-135","csi-mock-csi-mock-volumes-1517":"csi-mock-csi-mock-volumes-1517","csi-mock-csi-mock-volumes-1561":"csi-mock-csi-mock-volumes-1561","csi-mock-csi-mock-volumes-1595":"csi-mock-csi-mock-volumes-1595","csi-mock-csi-mock-volumes-1714":"csi-mock-csi-mock-volumes-1714","csi-mock-csi-mock-volumes-1732":"csi-mock-csi-mock-volumes-1732","csi-mock-csi-mock-volumes-1876":"csi-mock-csi-mock-volumes-1876","csi-mock-csi-mock-volumes-1945":"csi-mock-csi-mock-volumes-1945","csi-mock-csi-mock-volumes-2041":"csi-mock-csi-mock-volumes-2041","csi-mock-csi-mock-volumes-2057":"csi-mock-csi-mock-volumes-2057","csi-mock-csi-mock-volumes-2208":"csi-mock-csi-mock-volumes-2208","csi-mock-csi-mock-volumes-2302":"csi-mock-csi-mock-volumes-2302","csi-mock-csi-mock-volumes-2459":"csi-mock-csi-mock-volumes-2459","csi-mock-csi-mock-volumes-2511":"csi-mock-csi-mock-volumes-2511","csi-mock-csi-mock-volumes-2706":"csi-mock-csi-mock-volumes-2706","csi-mock-csi-mock-volumes-2710":"csi-mock-csi-mock-volumes-2710","csi-mock-csi-mock-volumes-273":"csi-mock-csi-mock-volumes-273","csi-mock-csi-mock-volumes-2782":"csi-mock-csi-mock-volumes-2782","csi-mock-csi-mock-volumes-2797":"csi-mock-csi-mock-volumes-2797","csi-mock-csi-mock-volumes-2805":"csi-mock-csi-mock-volumes-2805","csi-mock-csi-mock-volumes-2843":"csi-mock-csi-mock-volumes-2843","csi-mock-csi-mock-volumes-3075":"csi-mock-csi-mock-volumes-3075","csi-mock-csi-mock-volumes-3107":"csi-mock-csi-mock-volumes-3107","csi-mock-csi-mock-volumes-3115":"csi-mock-csi-mock-volumes-3115","csi-mock-csi-mock-volumes-3164":"csi-mock-csi-mock-volumes-3164","csi-mock-csi-mock-volumes-3202":"csi-mock-csi-mock-volumes-3202","csi-mock-csi-mock-volumes-3235":"csi-mock-csi-mock-volumes-3235","csi-mock-csi-mock-volumes-3298":"csi-mock-csi-mock-volumes-3298","csi-mock-csi-mock-volumes-3313":"csi-mock-csi-mock-volumes-3313","csi-mock-csi-mock-volumes-3364":"csi-mock-csi-mock-volumes-3364","csi-mock-csi-mock-volumes-3488":"csi-mock-csi-mock-volumes-3488","csi-mock-csi-mock-volumes-3490":"csi-mock-csi-mock-volumes-3490","csi-mock-csi-mock-volumes-3546":"csi-mock-csi-mock-volumes-3546","csi-mock-csi-mock-volumes-3568":"csi-mock-csi-mock-volumes-3568","csi-mock-csi-mock-volumes-3595":"csi-mock-csi-mock-volumes-3595","csi-mock-csi-mock-volumes-3615":"csi-mock-csi-mock-volumes-3615","csi-mock-csi-mock-volumes-3622":"csi-mock-csi-mock-volumes-3622","csi-mock-csi-mock-volumes-3660":"csi-mock-csi-mock-volumes-3660","csi-mock-csi-mock-volumes-3738":"csi-mock-csi-mock-volumes-3738","csi-mock-csi-mock-volumes-380":"csi-mock-csi-mock-volumes-380","csi-mock-csi-mock-volumes-3905":"csi-mock-csi-mock-volumes-3905","csi-mock-csi-mock-volumes-3983":"csi-mock-csi-mock-volumes-3983","csi-mock
-csi-mock-volumes-4658":"csi-mock-csi-mock-volumes-4658","csi-mock-csi-mock-volumes-4689":"csi-mock-csi-mock-volumes-4689","csi-mock-csi-mock-volumes-4839":"csi-mock-csi-mock-volumes-4839","csi-mock-csi-mock-volumes-4871":"csi-mock-csi-mock-volumes-4871","csi-mock-csi-mock-volumes-4885":"csi-mock-csi-mock-volumes-4885","csi-mock-csi-mock-volumes-4888":"csi-mock-csi-mock-volumes-4888","csi-mock-csi-mock-volumes-5028":"csi-mock-csi-mock-volumes-5028","csi-mock-csi-mock-volumes-5118":"csi-mock-csi-mock-volumes-5118","csi-mock-csi-mock-volumes-5120":"csi-mock-csi-mock-volumes-5120","csi-mock-csi-mock-volumes-5160":"csi-mock-csi-mock-volumes-5160","csi-mock-csi-mock-volumes-5164":"csi-mock-csi-mock-volumes-5164","csi-mock-csi-mock-volumes-5225":"csi-mock-csi-mock-volumes-5225","csi-mock-csi-mock-volumes-526":"csi-mock-csi-mock-volumes-526","csi-mock-csi-mock-volumes-5365":"csi-mock-csi-mock-volumes-5365","csi-mock-csi-mock-volumes-5399":"csi-mock-csi-mock-volumes-5399","csi-mock-csi-mock-volumes-5443":"csi-mock-csi-mock-volumes-5443","csi-mock-csi-mock-volumes-5520":"csi-mock-csi-mock-volumes-5520","csi-mock-csi-mock-volumes-5561":"csi-mock-csi-mock-volumes-5561","csi-mock-csi-mock-volumes-5608":"csi-mock-csi-mock-volumes-5608","csi-mock-csi-mock-volumes-5652":"csi-mock-csi-mock-volumes-5652","csi-mock-csi-mock-volumes-5672":"csi-mock-csi-mock-volumes-5672","csi-mock-csi-mock-volumes-569":"csi-mock-csi-mock-volumes-569","csi-mock-csi-mock-volumes-5759":"csi-mock-csi-mock-volumes-5759","csi-mock-csi-mock-volumes-5910":"csi-mock-csi-mock-volumes-5910","csi-mock-csi-mock-volumes-6046":"csi-mock-csi-mock-volumes-6046","csi-mock-csi-mock-volumes-6099":"csi-mock-csi-mock-volumes-6099","csi-mock-csi-mock-volumes-621":"csi-mock-csi-mock-volumes-621","csi-mock-csi-mock-volumes-6347":"csi-mock-csi-mock-volumes-6347","csi-mock-csi-mock-volumes-6447":"csi-mock-csi-mock-volumes-6447","csi-mock-csi-mock-volumes-6752":"csi-mock-csi-mock-volumes-6752","csi-mock-csi-mock-volumes-6763":"csi-mock-csi-mock-volumes-6763","csi-mock-csi-mock-volumes-7184":"csi-mock-csi-mock-volumes-7184","csi-mock-csi-mock-volumes-7244":"csi-mock-csi-mock-volumes-7244","csi-mock-csi-mock-volumes-7259":"csi-mock-csi-mock-volumes-7259","csi-mock-csi-mock-volumes-726":"csi-mock-csi-mock-volumes-726","csi-mock-csi-mock-volumes-7302":"csi-mock-csi-mock-volumes-7302","csi-mock-csi-mock-volumes-7346":"csi-mock-csi-mock-volumes-7346","csi-mock-csi-mock-volumes-7378":"csi-mock-csi-mock-volumes-7378","csi-mock-csi-mock-volumes-7385":"csi-mock-csi-mock-volumes-7385","csi-mock-csi-mock-volumes-746":"csi-mock-csi-mock-volumes-746","csi-mock-csi-mock-volumes-7574":"csi-mock-csi-mock-volumes-7574","csi-mock-csi-mock-volumes-7712":"csi-mock-csi-mock-volumes-7712","csi-mock-csi-mock-volumes-7749":"csi-mock-csi-mock-volumes-7749","csi-mock-csi-mock-volumes-7820":"csi-mock-csi-mock-volumes-7820","csi-mock-csi-mock-volumes-7946":"csi-mock-csi-mock-volumes-7946","csi-mock-csi-mock-volumes-8159":"csi-mock-csi-mock-volumes-8159","csi-mock-csi-mock-volumes-8382":"csi-mock-csi-mock-volumes-8382","csi-mock-csi-mock-volumes-8458":"csi-mock-csi-mock-volumes-8458","csi-mock-csi-mock-volumes-8569":"csi-mock-csi-mock-volumes-8569","csi-mock-csi-mock-volumes-8626":"csi-mock-csi-mock-volumes-8626","csi-mock-csi-mock-volumes-8627":"csi-mock-csi-mock-volumes-8627","csi-mock-csi-mock-volumes-8774":"csi-mock-csi-mock-volumes-8774","csi-mock-csi-mock-volumes-8777":"csi-mock-csi-mock-volumes-8777","csi-mock-csi-mock-volumes-8880":"csi-mock-csi-mock-volumes-8880","csi-mock-
csi-mock-volumes-923":"csi-mock-csi-mock-volumes-923","csi-mock-csi-mock-volumes-9279":"csi-mock-csi-mock-volumes-9279","csi-mock-csi-mock-volumes-9372":"csi-mock-csi-mock-volumes-9372","csi-mock-csi-mock-volumes-9687":"csi-mock-csi-mock-volumes-9687","csi-mock-csi-mock-volumes-969":"csi-mock-csi-mock-volumes-969","csi-mock-csi-mock-volumes-983":"csi-mock-csi-mock-volumes-983","csi-mock-csi-mock-volumes-9858":"csi-mock-csi-mock-volumes-9858","csi-mock-csi-mock-volumes-9983":"csi-mock-csi-mock-volumes-9983","csi-mock-csi-mock-volumes-9992":"csi-mock-csi-mock-volumes-9992","csi-mock-csi-mock-volumes-9995":"csi-mock-csi-mock-volumes-9995"} kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-02-19 10:12:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {kube-controller-manager Update v1 2021-03-22 00:07:53 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}}}}} {kubelet Update v1 2021-03-22 00:08:24 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:csi.volume.kubernetes.io/nodeid":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:io.kubernetes.storage.mock/node":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:kind://docker/latest/latest-worker,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 
DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-03-22 00:13:25 +0000 UTC,LastTransitionTime:2021-02-19 10:12:05 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-03-22 00:13:25 +0000 UTC,LastTransitionTime:2021-02-19 10:12:05 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-03-22 00:13:25 +0000 UTC,LastTransitionTime:2021-02-19 10:12:05 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-03-22 00:13:25 +0000 UTC,LastTransitionTime:2021-02-19 10:12:25 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.9,},NodeAddress{Type:Hostname,Address:latest-worker,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:9501a3ffa7bf40adaca8f4cb3d1dcc93,SystemUUID:6921bb21-67c9-42ea-b514-81b1daac5968,BootID:b267d78b-f69b-4338-80e8-3f4944338e5d,KernelVersion:4.15.0-118-generic,OSImage:Ubuntu 20.04 LTS,ContainerRuntimeVersion:containerd://1.3.3-14-g449e9269,KubeletVersion:v1.21.0-alpha.0,KubeProxyVersion:v1.21.0-alpha.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/ollivier/clearwater-cassandra@sha256:7460a5e29c91e16bd36534b5284b27832e1c6e0f7c22a4bb79eda79942d250e1 docker.io/ollivier/clearwater-cassandra:hunter],SizeBytes:386500834,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead-prov@sha256:ef3803b4336ea217f9a467dac7f5c4bc3d2cd0f3cbbcc6c101419506fc3a6fa4 docker.io/ollivier/clearwater-homestead-prov:hunter],SizeBytes:360721934,},ContainerImage{Names:[docker.io/ollivier/clearwater-ellis@sha256:abcaf4a9ec74aa746f3b3949f05ee359349fce2b0b3b2deedf67164719cff5dc docker.io/ollivier/clearwater-ellis:hunter],SizeBytes:351519591,},ContainerImage{Names:[docker.io/ollivier/clearwater-homer@sha256:f8738d756137990120f6aafca7e0bc57186ab591a4b51508ba9ddf30688f4de1 docker.io/ollivier/clearwater-homer:hunter],SizeBytes:344304298,},ContainerImage{Names:[docker.io/ollivier/clearwater-astaire@sha256:5bc730435563dc3e31f3a3b7bb58b5899c453929b624052622b3a9a1c2fd04d8 docker.io/ollivier/clearwater-astaire:hunter],SizeBytes:327310970,},ContainerImage{Names:[docker.io/ollivier/clearwater-bono@sha256:0da41ddbf72c158f8e617015450587634cdf6856696005601b2108df92a27254 docker.io/ollivier/clearwater-bono:hunter],SizeBytes:303708624,},ContainerImage{Names:[docker.io/ollivier/clearwater-sprout@sha256:fd91f90bdfbfac30288ca296962e36537f8fa311d34711904e027098227d9f49 docker.io/ollivier/clearwater-sprout:hunter],SizeBytes:298627136,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead@sha256:8467d6c9caefa6a402eba927ee31f8179b2dc3386ac4768dd04f4bd630a1b9e9 docker.io/ollivier/clearwater-homestead:hunter],SizeBytes:295167572,},ContainerImage{Names:[docker.io/ollivier/clearwater-ralf@sha256:7054a741dbd4ea45fa4efd8138c07e21d35fbed627971559de5df2d03883e94f 
docker.io/ollivier/clearwater-ralf:hunter],SizeBytes:287441316,},ContainerImage{Names:[docker.io/ollivier/clearwater-chronos@sha256:1d5f9b5385db632e3076463fec5a4db5d4a15ef21ac5a112cb42fb0551ffa36d docker.io/ollivier/clearwater-chronos:hunter],SizeBytes:285504787,},ContainerImage{Names:[k8s.gcr.io/etcd:3.4.13-0],SizeBytes:254659261,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.21.0-alpha.0],SizeBytes:136866158,},ContainerImage{Names:[docker.io/aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 docker.io/aquasec/kube-hunter:0.3.1],SizeBytes:124684106,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:112029652,},ContainerImage{Names:[docker.io/gluster/glusterdynamic-provisioner@sha256:90067cb05a7d217651e84576935934fd9dcff8e6bcdcbaa416bbf36fcd09dbd1 docker.io/gluster/glusterdynamic-provisioner:v1.0],SizeBytes:111200078,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner@sha256:8bc20b52ce066dd4ea3d9eaac40c04ea8a77f47c33789676580cf4c7c9ea3c3d k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner:v1.0],SizeBytes:111199402,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.21.0-alpha.0],SizeBytes:95511852,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.21.0-alpha.0],SizeBytes:88147262,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:85425365,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.21.0-alpha.0],SizeBytes:66088748,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:49230179,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:a3f7549ea04c419276e8b84e90a515bbce5bc8a057be2ed974ec45492eca346e k8s.gcr.io/e2e-test-images/agnhost:2.26],SizeBytes:49216572,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:4f3eec1e602e1e0b0e5858c8f3d399d0ef536799ec1f7666e025f211001cb706 k8s.gcr.io/e2e-test-images/agnhost:2.28],SizeBytes:49210832,},ContainerImage{Names:[docker.io/bitnami/kubectl@sha256:471a2f06834db208d0e4bef5856550f911357c41b72d0baefa591b3714839067 docker.io/bitnami/kubectl:latest],SizeBytes:48898314,},ContainerImage{Names:[k8s.gcr.io/coredns:1.7.0],SizeBytes:45355487,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:41902332,},ContainerImage{Names:[docker.io/library/httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a docker.io/library/httpd:2.4.39-alpine],SizeBytes:41901429,},ContainerImage{Names:[docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 docker.io/library/httpd:2.4.38-alpine],SizeBytes:40765017,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 
k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:40765006,},ContainerImage{Names:[docker.io/ollivier/clearwater-live-test@sha256:1ec1c46f531b5466e62e2a3f1e65e97336e8ea0ca50f11a4305775d6aeac9a58 docker.io/ollivier/clearwater-live-test:hunter],SizeBytes:39060692,},ContainerImage{Names:[k8s.gcr.io/build-image/debian-iptables@sha256:abe8cef9e116f2d5ec1175c386e33841ff3386779138b425af876384b0fd7ccb k8s.gcr.io/build-image/debian-iptables:buster-v1.5.0],SizeBytes:38088315,},ContainerImage{Names:[k8s.gcr.io/build-image/debian-iptables@sha256:4c9410a4ee555dcb0e8b7bd6fc77c65ac400f7c5bd4555df8187630efaea6ea4 k8s.gcr.io/build-image/debian-iptables:buster-v1.3.0],SizeBytes:37934917,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:ff02aacd9766d597883fabafc7ad604c719a57611db1bcc1564c69a45b000a55 gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.17],SizeBytes:25311280,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:24757245,},ContainerImage{Names:[docker.io/pingcap/chaos-daemon@sha256:2138de599ca94104101fcd8f3bd04401f8c88f2bc721d0f77d55a81b85f4d92f docker.io/pingcap/chaos-daemon:v0.8.0],SizeBytes:21885344,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:20c828075d1e36f679d6a91e905b0927141eef5e15be0c9a1ca4a6a0ed9313d2 k8s.gcr.io/sig-storage/csi-provisioner:v2.1.0],SizeBytes:21205045,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:78e3393f5fd5ff6c1e5dada2478cfa456fb7164929e573cf9a87bf6532730679 k8s.gcr.io/sig-storage/csi-provisioner:v1.6.0],SizeBytes:19408504,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:2ffa647e8107cfd39e5f464e738dce014c9f5e51b108da36c3ab621048d0bbab k8s.gcr.io/sig-storage/csi-attacher:v2.2.0],SizeBytes:18451536,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:6c6a0332693a7c456378f6abd2bb40611826c1e1a733cadbdae2daab3125b71c k8s.gcr.io/sig-storage/csi-resizer:v0.5.0],SizeBytes:18412631,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:4051e85640c22f8e00c34dbd273576fc9e1e2829992656588062be9c0f69b04b k8s.gcr.io/e2e-test-images/nonroot:1.1],SizeBytes:17748448,},ContainerImage{Names:[docker.io/pingcap/chaos-mesh@sha256:b90c04665e0275602e52df37d4b6fdd4621df7d4e4c823fc63356559f67dea72 docker.io/pingcap/chaos-mesh:v0.8.0],SizeBytes:13863887,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e docker.io/coredns/coredns:latest docker.io/coredns/coredns:1.8.0],SizeBytes:12945155,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:4a6e0769130686518325b21b0c1d0688b54e7c79244d48e1b15634e98e40c6ef docker.io/coredns/coredns:1.7.1],SizeBytes:12935701,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 
docker.io/aquasec/kube-bench:0.3.1],SizeBytes:8042926,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:9622c6a6dac7499a055a382930f4de82905a3c5735c0753f7094115c9c871309 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v1.3.0],SizeBytes:7717137,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 22 00:14:36.366: INFO: Logging kubelet events for node latest-worker Mar 22 00:14:36.374: INFO: Logging pods the kubelet thinks are on node latest-worker Mar 22 00:14:36.398: INFO: kindnet-l4mzm started at 2021-03-22 00:02:51 +0000 UTC (0+1 container statuses recorded) Mar 22 00:14:36.398: INFO: Container kindnet-cni ready: true, restart count 0 Mar 22 00:14:36.398: INFO: chaos-daemon-vb9xf started at 2021-03-22 00:02:51 +0000 UTC (0+1 container statuses recorded) Mar 22 00:14:36.398: INFO: Container chaos-daemon ready: true, restart count 0 Mar 22 00:14:36.398: INFO: chaos-controller-manager-69c479c674-rdmrr started at 2021-03-22 00:02:52 +0000 UTC (0+1 container statuses recorded) Mar 22 00:14:36.398: INFO: Container chaos-mesh ready: true, restart count 0 Mar 22 00:14:36.398: INFO: externalname-service-qx52c started at 2021-03-22 00:12:25 +0000 UTC (0+1 container statuses recorded) Mar 22 00:14:36.398: INFO: Container externalname-service ready: true, restart count 0 Mar 22 00:14:36.398: INFO: kube-proxy-5wvjm started at 2021-02-19 10:12:05 +0000 UTC (0+1 container statuses recorded) Mar 22 00:14:36.398: INFO: Container kube-proxy ready: true, restart count 0 W0322 00:14:36.405587 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Mar 22 00:14:36.641: INFO: Latency metrics for node latest-worker Mar 22 00:14:36.641: INFO: Logging node info for node latest-worker2 Mar 22 00:14:36.645: INFO: Node Info: &Node{ObjectMeta:{latest-worker2 7d2a1377-0c6f-45fb-899e-6c307ecb1803 6991776 0 2021-02-19 10:12:05 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux io.kubernetes.storage.mock/node:some-mock-node kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-worker2 kubernetes.io/os:linux] 
map[csi.volume.kubernetes.io/nodeid:{"csi-mock-csi-mock-volumes-1005":"csi-mock-csi-mock-volumes-1005","csi-mock-csi-mock-volumes-1125":"csi-mock-csi-mock-volumes-1125","csi-mock-csi-mock-volumes-1158":"csi-mock-csi-mock-volumes-1158","csi-mock-csi-mock-volumes-1167":"csi-mock-csi-mock-volumes-1167","csi-mock-csi-mock-volumes-117":"csi-mock-csi-mock-volumes-117","csi-mock-csi-mock-volumes-123":"csi-mock-csi-mock-volumes-123","csi-mock-csi-mock-volumes-1248":"csi-mock-csi-mock-volumes-1248","csi-mock-csi-mock-volumes-132":"csi-mock-csi-mock-volumes-132","csi-mock-csi-mock-volumes-1462":"csi-mock-csi-mock-volumes-1462","csi-mock-csi-mock-volumes-1508":"csi-mock-csi-mock-volumes-1508","csi-mock-csi-mock-volumes-1620":"csi-mock-csi-mock-volumes-1620","csi-mock-csi-mock-volumes-1694":"csi-mock-csi-mock-volumes-1694","csi-mock-csi-mock-volumes-1823":"csi-mock-csi-mock-volumes-1823","csi-mock-csi-mock-volumes-1829":"csi-mock-csi-mock-volumes-1829","csi-mock-csi-mock-volumes-1852":"csi-mock-csi-mock-volumes-1852","csi-mock-csi-mock-volumes-1856":"csi-mock-csi-mock-volumes-1856","csi-mock-csi-mock-volumes-194":"csi-mock-csi-mock-volumes-194","csi-mock-csi-mock-volumes-1943":"csi-mock-csi-mock-volumes-1943","csi-mock-csi-mock-volumes-2103":"csi-mock-csi-mock-volumes-2103","csi-mock-csi-mock-volumes-2109":"csi-mock-csi-mock-volumes-2109","csi-mock-csi-mock-volumes-2134":"csi-mock-csi-mock-volumes-2134","csi-mock-csi-mock-volumes-2212":"csi-mock-csi-mock-volumes-2212","csi-mock-csi-mock-volumes-2302":"csi-mock-csi-mock-volumes-2302","csi-mock-csi-mock-volumes-2308":"csi-mock-csi-mock-volumes-2308","csi-mock-csi-mock-volumes-2407":"csi-mock-csi-mock-volumes-2407","csi-mock-csi-mock-volumes-2458":"csi-mock-csi-mock-volumes-2458","csi-mock-csi-mock-volumes-2474":"csi-mock-csi-mock-volumes-2474","csi-mock-csi-mock-volumes-254":"csi-mock-csi-mock-volumes-254","csi-mock-csi-mock-volumes-2575":"csi-mock-csi-mock-volumes-2575","csi-mock-csi-mock-volumes-2622":"csi-mock-csi-mock-volumes-2622","csi-mock-csi-mock-volumes-2651":"csi-mock-csi-mock-volumes-2651","csi-mock-csi-mock-volumes-2668":"csi-mock-csi-mock-volumes-2668","csi-mock-csi-mock-volumes-2781":"csi-mock-csi-mock-volumes-2781","csi-mock-csi-mock-volumes-2791":"csi-mock-csi-mock-volumes-2791","csi-mock-csi-mock-volumes-2823":"csi-mock-csi-mock-volumes-2823","csi-mock-csi-mock-volumes-2847":"csi-mock-csi-mock-volumes-2847","csi-mock-csi-mock-volumes-295":"csi-mock-csi-mock-volumes-295","csi-mock-csi-mock-volumes-3129":"csi-mock-csi-mock-volumes-3129","csi-mock-csi-mock-volumes-3190":"csi-mock-csi-mock-volumes-3190","csi-mock-csi-mock-volumes-3428":"csi-mock-csi-mock-volumes-3428","csi-mock-csi-mock-volumes-3490":"csi-mock-csi-mock-volumes-3490","csi-mock-csi-mock-volumes-3491":"csi-mock-csi-mock-volumes-3491","csi-mock-csi-mock-volumes-3532":"csi-mock-csi-mock-volumes-3532","csi-mock-csi-mock-volumes-3609":"csi-mock-csi-mock-volumes-3609","csi-mock-csi-mock-volumes-368":"csi-mock-csi-mock-volumes-368","csi-mock-csi-mock-volumes-3698":"csi-mock-csi-mock-volumes-3698","csi-mock-csi-mock-volumes-3714":"csi-mock-csi-mock-volumes-3714","csi-mock-csi-mock-volumes-3720":"csi-mock-csi-mock-volumes-3720","csi-mock-csi-mock-volumes-3722":"csi-mock-csi-mock-volumes-3722","csi-mock-csi-mock-volumes-3723":"csi-mock-csi-mock-volumes-3723","csi-mock-csi-mock-volumes-3783":"csi-mock-csi-mock-volumes-3783","csi-mock-csi-mock-volumes-3887":"csi-mock-csi-mock-volumes-3887","csi-mock-csi-mock-volumes-3973":"csi-mock-csi-mock-volumes-3973","csi-mock-csi-mock-volumes-4074":"
csi-mock-csi-mock-volumes-4074","csi-mock-csi-mock-volumes-4164":"csi-mock-csi-mock-volumes-4164","csi-mock-csi-mock-volumes-4181":"csi-mock-csi-mock-volumes-4181","csi-mock-csi-mock-volumes-4442":"csi-mock-csi-mock-volumes-4442","csi-mock-csi-mock-volumes-4483":"csi-mock-csi-mock-volumes-4483","csi-mock-csi-mock-volumes-4549":"csi-mock-csi-mock-volumes-4549","csi-mock-csi-mock-volumes-4707":"csi-mock-csi-mock-volumes-4707","csi-mock-csi-mock-volumes-4906":"csi-mock-csi-mock-volumes-4906","csi-mock-csi-mock-volumes-4977":"csi-mock-csi-mock-volumes-4977","csi-mock-csi-mock-volumes-5116":"csi-mock-csi-mock-volumes-5116","csi-mock-csi-mock-volumes-5166":"csi-mock-csi-mock-volumes-5166","csi-mock-csi-mock-volumes-5233":"csi-mock-csi-mock-volumes-5233","csi-mock-csi-mock-volumes-5344":"csi-mock-csi-mock-volumes-5344","csi-mock-csi-mock-volumes-5406":"csi-mock-csi-mock-volumes-5406","csi-mock-csi-mock-volumes-5433":"csi-mock-csi-mock-volumes-5433","csi-mock-csi-mock-volumes-5483":"csi-mock-csi-mock-volumes-5483","csi-mock-csi-mock-volumes-5486":"csi-mock-csi-mock-volumes-5486","csi-mock-csi-mock-volumes-5520":"csi-mock-csi-mock-volumes-5520","csi-mock-csi-mock-volumes-5540":"csi-mock-csi-mock-volumes-5540","csi-mock-csi-mock-volumes-5592":"csi-mock-csi-mock-volumes-5592","csi-mock-csi-mock-volumes-5741":"csi-mock-csi-mock-volumes-5741","csi-mock-csi-mock-volumes-5753":"csi-mock-csi-mock-volumes-5753","csi-mock-csi-mock-volumes-5790":"csi-mock-csi-mock-volumes-5790","csi-mock-csi-mock-volumes-5820":"csi-mock-csi-mock-volumes-5820","csi-mock-csi-mock-volumes-5830":"csi-mock-csi-mock-volumes-5830","csi-mock-csi-mock-volumes-5880":"csi-mock-csi-mock-volumes-5880","csi-mock-csi-mock-volumes-5886":"csi-mock-csi-mock-volumes-5886","csi-mock-csi-mock-volumes-5899":"csi-mock-csi-mock-volumes-5899","csi-mock-csi-mock-volumes-5925":"csi-mock-csi-mock-volumes-5925","csi-mock-csi-mock-volumes-5928":"csi-mock-csi-mock-volumes-5928","csi-mock-csi-mock-volumes-6098":"csi-mock-csi-mock-volumes-6098","csi-mock-csi-mock-volumes-6154":"csi-mock-csi-mock-volumes-6154","csi-mock-csi-mock-volumes-6193":"csi-mock-csi-mock-volumes-6193","csi-mock-csi-mock-volumes-6237":"csi-mock-csi-mock-volumes-6237","csi-mock-csi-mock-volumes-6393":"csi-mock-csi-mock-volumes-6393","csi-mock-csi-mock-volumes-6394":"csi-mock-csi-mock-volumes-6394","csi-mock-csi-mock-volumes-6468":"csi-mock-csi-mock-volumes-6468","csi-mock-csi-mock-volumes-6508":"csi-mock-csi-mock-volumes-6508","csi-mock-csi-mock-volumes-6516":"csi-mock-csi-mock-volumes-6516","csi-mock-csi-mock-volumes-6520":"csi-mock-csi-mock-volumes-6520","csi-mock-csi-mock-volumes-6574":"csi-mock-csi-mock-volumes-6574","csi-mock-csi-mock-volumes-6663":"csi-mock-csi-mock-volumes-6663","csi-mock-csi-mock-volumes-6715":"csi-mock-csi-mock-volumes-6715","csi-mock-csi-mock-volumes-6754":"csi-mock-csi-mock-volumes-6754","csi-mock-csi-mock-volumes-6804":"csi-mock-csi-mock-volumes-6804","csi-mock-csi-mock-volumes-6918":"csi-mock-csi-mock-volumes-6918","csi-mock-csi-mock-volumes-6925":"csi-mock-csi-mock-volumes-6925","csi-mock-csi-mock-volumes-7092":"csi-mock-csi-mock-volumes-7092","csi-mock-csi-mock-volumes-7139":"csi-mock-csi-mock-volumes-7139","csi-mock-csi-mock-volumes-7270":"csi-mock-csi-mock-volumes-7270","csi-mock-csi-mock-volumes-7273":"csi-mock-csi-mock-volumes-7273","csi-mock-csi-mock-volumes-7442":"csi-mock-csi-mock-volumes-7442","csi-mock-csi-mock-volumes-7448":"csi-mock-csi-mock-volumes-7448","csi-mock-csi-mock-volumes-7543":"csi-mock-csi-mock-volumes-7543","csi-mock-csi-mock-volume
s-7597":"csi-mock-csi-mock-volumes-7597","csi-mock-csi-mock-volumes-7608":"csi-mock-csi-mock-volumes-7608","csi-mock-csi-mock-volumes-7642":"csi-mock-csi-mock-volumes-7642","csi-mock-csi-mock-volumes-7659":"csi-mock-csi-mock-volumes-7659","csi-mock-csi-mock-volumes-7725":"csi-mock-csi-mock-volumes-7725","csi-mock-csi-mock-volumes-7760":"csi-mock-csi-mock-volumes-7760","csi-mock-csi-mock-volumes-778":"csi-mock-csi-mock-volumes-778","csi-mock-csi-mock-volumes-7811":"csi-mock-csi-mock-volumes-7811","csi-mock-csi-mock-volumes-7819":"csi-mock-csi-mock-volumes-7819","csi-mock-csi-mock-volumes-791":"csi-mock-csi-mock-volumes-791","csi-mock-csi-mock-volumes-7929":"csi-mock-csi-mock-volumes-7929","csi-mock-csi-mock-volumes-7930":"csi-mock-csi-mock-volumes-7930","csi-mock-csi-mock-volumes-7933":"csi-mock-csi-mock-volumes-7933","csi-mock-csi-mock-volumes-7950":"csi-mock-csi-mock-volumes-7950","csi-mock-csi-mock-volumes-8005":"csi-mock-csi-mock-volumes-8005","csi-mock-csi-mock-volumes-8070":"csi-mock-csi-mock-volumes-8070","csi-mock-csi-mock-volumes-8123":"csi-mock-csi-mock-volumes-8123","csi-mock-csi-mock-volumes-8132":"csi-mock-csi-mock-volumes-8132","csi-mock-csi-mock-volumes-8134":"csi-mock-csi-mock-volumes-8134","csi-mock-csi-mock-volumes-8381":"csi-mock-csi-mock-volumes-8381","csi-mock-csi-mock-volumes-8391":"csi-mock-csi-mock-volumes-8391","csi-mock-csi-mock-volumes-8409":"csi-mock-csi-mock-volumes-8409","csi-mock-csi-mock-volumes-8420":"csi-mock-csi-mock-volumes-8420","csi-mock-csi-mock-volumes-8467":"csi-mock-csi-mock-volumes-8467","csi-mock-csi-mock-volumes-8581":"csi-mock-csi-mock-volumes-8581","csi-mock-csi-mock-volumes-8619":"csi-mock-csi-mock-volumes-8619","csi-mock-csi-mock-volumes-8708":"csi-mock-csi-mock-volumes-8708","csi-mock-csi-mock-volumes-8766":"csi-mock-csi-mock-volumes-8766","csi-mock-csi-mock-volumes-8789":"csi-mock-csi-mock-volumes-8789","csi-mock-csi-mock-volumes-8800":"csi-mock-csi-mock-volumes-8800","csi-mock-csi-mock-volumes-8819":"csi-mock-csi-mock-volumes-8819","csi-mock-csi-mock-volumes-8830":"csi-mock-csi-mock-volumes-8830","csi-mock-csi-mock-volumes-9034":"csi-mock-csi-mock-volumes-9034","csi-mock-csi-mock-volumes-9051":"csi-mock-csi-mock-volumes-9051","csi-mock-csi-mock-volumes-9052":"csi-mock-csi-mock-volumes-9052","csi-mock-csi-mock-volumes-9211":"csi-mock-csi-mock-volumes-9211","csi-mock-csi-mock-volumes-9234":"csi-mock-csi-mock-volumes-9234","csi-mock-csi-mock-volumes-9238":"csi-mock-csi-mock-volumes-9238","csi-mock-csi-mock-volumes-9316":"csi-mock-csi-mock-volumes-9316","csi-mock-csi-mock-volumes-9344":"csi-mock-csi-mock-volumes-9344","csi-mock-csi-mock-volumes-9386":"csi-mock-csi-mock-volumes-9386","csi-mock-csi-mock-volumes-941":"csi-mock-csi-mock-volumes-941","csi-mock-csi-mock-volumes-9416":"csi-mock-csi-mock-volumes-9416","csi-mock-csi-mock-volumes-9426":"csi-mock-csi-mock-volumes-9426","csi-mock-csi-mock-volumes-9429":"csi-mock-csi-mock-volumes-9429","csi-mock-csi-mock-volumes-9438":"csi-mock-csi-mock-volumes-9438","csi-mock-csi-mock-volumes-9439":"csi-mock-csi-mock-volumes-9439","csi-mock-csi-mock-volumes-9665":"csi-mock-csi-mock-volumes-9665","csi-mock-csi-mock-volumes-9772":"csi-mock-csi-mock-volumes-9772","csi-mock-csi-mock-volumes-9793":"csi-mock-csi-mock-volumes-9793","csi-mock-csi-mock-volumes-9921":"csi-mock-csi-mock-volumes-9921","csi-mock-csi-mock-volumes-9953":"csi-mock-csi-mock-volumes-9953"} kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 
volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-02-19 10:12:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {kube-controller-manager Update v1 2021-03-22 00:13:06 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}} {kubelet Update v1 2021-03-22 00:13:35 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:csi.volume.kubernetes.io/nodeid":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:io.kubernetes.storage.mock/node":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:kind://docker/latest/latest-worker2,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-03-22 00:13:35 +0000 UTC,LastTransitionTime:2021-02-19 10:12:05 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-03-22 00:13:35 +0000 UTC,LastTransitionTime:2021-02-19 10:12:05 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-03-22 00:13:35 +0000 UTC,LastTransitionTime:2021-02-19 10:12:05 +0000 
UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-03-22 00:13:35 +0000 UTC,LastTransitionTime:2021-02-19 10:12:35 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.13,},NodeAddress{Type:Hostname,Address:latest-worker2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:c99819c175ab4bf18f91789643e4cec7,SystemUUID:67d3f1bb-f61c-4599-98ac-5879bee4ddde,BootID:b267d78b-f69b-4338-80e8-3f4944338e5d,KernelVersion:4.15.0-118-generic,OSImage:Ubuntu 20.04 LTS,ContainerRuntimeVersion:containerd://1.3.3-14-g449e9269,KubeletVersion:v1.21.0-alpha.0,KubeProxyVersion:v1.21.0-alpha.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/ollivier/clearwater-cassandra@sha256:7460a5e29c91e16bd36534b5284b27832e1c6e0f7c22a4bb79eda79942d250e1 docker.io/ollivier/clearwater-cassandra:hunter],SizeBytes:386500834,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead-prov@sha256:ef3803b4336ea217f9a467dac7f5c4bc3d2cd0f3cbbcc6c101419506fc3a6fa4 docker.io/ollivier/clearwater-homestead-prov:hunter],SizeBytes:360721934,},ContainerImage{Names:[docker.io/ollivier/clearwater-ellis@sha256:abcaf4a9ec74aa746f3b3949f05ee359349fce2b0b3b2deedf67164719cff5dc docker.io/ollivier/clearwater-ellis:hunter],SizeBytes:351519591,},ContainerImage{Names:[docker.io/ollivier/clearwater-homer@sha256:f8738d756137990120f6aafca7e0bc57186ab591a4b51508ba9ddf30688f4de1 docker.io/ollivier/clearwater-homer:hunter],SizeBytes:344304298,},ContainerImage{Names:[docker.io/ollivier/clearwater-astaire@sha256:5bc730435563dc3e31f3a3b7bb58b5899c453929b624052622b3a9a1c2fd04d8 docker.io/ollivier/clearwater-astaire:hunter],SizeBytes:327310970,},ContainerImage{Names:[docker.io/ollivier/clearwater-bono@sha256:0da41ddbf72c158f8e617015450587634cdf6856696005601b2108df92a27254 docker.io/ollivier/clearwater-bono:hunter],SizeBytes:303708624,},ContainerImage{Names:[docker.io/ollivier/clearwater-sprout@sha256:fd91f90bdfbfac30288ca296962e36537f8fa311d34711904e027098227d9f49 docker.io/ollivier/clearwater-sprout:hunter],SizeBytes:298627136,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead@sha256:8467d6c9caefa6a402eba927ee31f8179b2dc3386ac4768dd04f4bd630a1b9e9 docker.io/ollivier/clearwater-homestead:hunter],SizeBytes:295167572,},ContainerImage{Names:[docker.io/ollivier/clearwater-ralf@sha256:7054a741dbd4ea45fa4efd8138c07e21d35fbed627971559de5df2d03883e94f docker.io/ollivier/clearwater-ralf:hunter],SizeBytes:287441316,},ContainerImage{Names:[docker.io/ollivier/clearwater-chronos@sha256:1d5f9b5385db632e3076463fec5a4db5d4a15ef21ac5a112cb42fb0551ffa36d docker.io/ollivier/clearwater-chronos:hunter],SizeBytes:285504787,},ContainerImage{Names:[k8s.gcr.io/etcd:3.4.13-0],SizeBytes:254659261,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.21.0-alpha.0],SizeBytes:136866158,},ContainerImage{Names:[docker.io/aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 docker.io/aquasec/kube-hunter:0.3.1],SizeBytes:124684106,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 
k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:112029652,},ContainerImage{Names:[docker.io/gluster/glusterdynamic-provisioner@sha256:90067cb05a7d217651e84576935934fd9dcff8e6bcdcbaa416bbf36fcd09dbd1 docker.io/gluster/glusterdynamic-provisioner:v1.0],SizeBytes:111200078,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner@sha256:8bc20b52ce066dd4ea3d9eaac40c04ea8a77f47c33789676580cf4c7c9ea3c3d k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner:v1.0],SizeBytes:111199402,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.21.0-alpha.0],SizeBytes:95511852,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.21.0-alpha.0],SizeBytes:88147262,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:85425365,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.21.0-alpha.0],SizeBytes:66088748,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:49230179,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:a3f7549ea04c419276e8b84e90a515bbce5bc8a057be2ed974ec45492eca346e k8s.gcr.io/e2e-test-images/agnhost:2.26],SizeBytes:49216572,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:4f3eec1e602e1e0b0e5858c8f3d399d0ef536799ec1f7666e025f211001cb706 k8s.gcr.io/e2e-test-images/agnhost:2.28],SizeBytes:49210832,},ContainerImage{Names:[k8s.gcr.io/coredns:1.7.0],SizeBytes:45355487,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:41902332,},ContainerImage{Names:[docker.io/library/httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a docker.io/library/httpd:2.4.39-alpine],SizeBytes:41901429,},ContainerImage{Names:[docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 docker.io/library/httpd:2.4.38-alpine],SizeBytes:40765017,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:40765006,},ContainerImage{Names:[docker.io/ollivier/clearwater-live-test@sha256:1ec1c46f531b5466e62e2a3f1e65e97336e8ea0ca50f11a4305775d6aeac9a58 docker.io/ollivier/clearwater-live-test:hunter],SizeBytes:39060692,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:ff02aacd9766d597883fabafc7ad604c719a57611db1bcc1564c69a45b000a55 gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.17],SizeBytes:25311280,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:24757245,},ContainerImage{Names:[docker.io/pingcap/chaos-daemon@sha256:2138de599ca94104101fcd8f3bd04401f8c88f2bc721d0f77d55a81b85f4d92f docker.io/pingcap/chaos-daemon:v0.8.0],SizeBytes:21885344,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:20c828075d1e36f679d6a91e905b0927141eef5e15be0c9a1ca4a6a0ed9313d2 
k8s.gcr.io/sig-storage/csi-provisioner:v2.1.0],SizeBytes:21205045,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:78e3393f5fd5ff6c1e5dada2478cfa456fb7164929e573cf9a87bf6532730679 k8s.gcr.io/sig-storage/csi-provisioner:v1.6.0],SizeBytes:19408504,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/regression-issue-74839@sha256:b4f1d8d61bdad84bd50442d161d5460e4019d53e989b64220fdbc62fc87d76bf k8s.gcr.io/e2e-test-images/regression-issue-74839:1.2],SizeBytes:18651485,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:2ffa647e8107cfd39e5f464e738dce014c9f5e51b108da36c3ab621048d0bbab k8s.gcr.io/sig-storage/csi-attacher:v2.2.0],SizeBytes:18451536,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:6c6a0332693a7c456378f6abd2bb40611826c1e1a733cadbdae2daab3125b71c k8s.gcr.io/sig-storage/csi-resizer:v0.5.0],SizeBytes:18412631,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:4051e85640c22f8e00c34dbd273576fc9e1e2829992656588062be9c0f69b04b k8s.gcr.io/e2e-test-images/nonroot:1.1],SizeBytes:17748448,},ContainerImage{Names:[docker.io/pingcap/chaos-mesh@sha256:b90c04665e0275602e52df37d4b6fdd4621df7d4e4c823fc63356559f67dea72 docker.io/pingcap/chaos-mesh:v0.8.0],SizeBytes:13863887,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e docker.io/coredns/coredns:1.8.0],SizeBytes:12945155,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:4a6e0769130686518325b21b0c1d0688b54e7c79244d48e1b15634e98e40c6ef docker.io/coredns/coredns:1.7.1],SizeBytes:12935701,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:642ff9910da6ea9a8624b0234eef52af9ca75ecbec474c5507cb096bdfbae4e5 docker.io/coredns/coredns:latest],SizeBytes:12893350,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 docker.io/aquasec/kube-bench:0.3.1],SizeBytes:8042926,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:9622c6a6dac7499a055a382930f4de82905a3c5735c0753f7094115c9c871309 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v1.3.0],SizeBytes:7717137,},ContainerImage{Names:[quay.io/coreos/etcd:v2.2.5],SizeBytes:7670543,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 22 00:14:36.645: INFO: Logging kubelet events for node latest-worker2 Mar 22 00:14:36.652: INFO: Logging pods the kubelet thinks is on node latest-worker2 Mar 22 00:14:36.670: INFO: chaos-daemon-4zjcg started at 2021-03-22 00:02:52 +0000 UTC (0+1 container statuses recorded) Mar 22 00:14:36.670: INFO: Container chaos-daemon ready: true, restart count 0 Mar 22 00:14:36.670: INFO: execpodnr8ls started at 2021-03-22 00:12:31 +0000 UTC (0+1 container statuses recorded) Mar 22 00:14:36.670: INFO: Container agnhost-container ready: true, restart count 0 Mar 22 00:14:36.670: INFO: csi-mockplugin-0 started at 2021-03-22 00:12:50 +0000 UTC (0+3 container statuses recorded) Mar 22 00:14:36.670: INFO: Container csi-provisioner ready: true, restart count 0 Mar 22 00:14:36.670: INFO: Container driver-registrar ready: true, restart 
count 0 Mar 22 00:14:36.670: INFO: Container mock ready: true, restart count 0 Mar 22 00:14:36.670: INFO: csi-mockplugin-attacher-0 started at 2021-03-22 00:12:51 +0000 UTC (0+1 container statuses recorded) Mar 22 00:14:36.670: INFO: Container csi-attacher ready: true, restart count 0 Mar 22 00:14:36.670: INFO: back-off-cap started at 2021-03-22 00:02:52 +0000 UTC (0+1 container statuses recorded) Mar 22 00:14:36.670: INFO: Container back-off-cap ready: false, restart count 6 Mar 22 00:14:36.670: INFO: kube-proxy-7q92q started at 2021-02-19 10:12:05 +0000 UTC (0+1 container statuses recorded) Mar 22 00:14:36.670: INFO: Container kube-proxy ready: true, restart count 0 Mar 22 00:14:36.670: INFO: kindnet-7qb7q started at 2021-03-22 00:02:52 +0000 UTC (0+1 container statuses recorded) Mar 22 00:14:36.670: INFO: Container kindnet-cni ready: true, restart count 0 Mar 22 00:14:36.670: INFO: externalname-service-wpxxh started at 2021-03-22 00:12:25 +0000 UTC (0+1 container statuses recorded) Mar 22 00:14:36.670: INFO: Container externalname-service ready: true, restart count 0 W0322 00:14:36.676431 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Mar 22 00:14:36.924: INFO: Latency metrics for node latest-worker2 Mar 22 00:14:36.924: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-7036" for this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • Failure [132.624 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should be able to change the type from ExternalName to ClusterIP [Conformance] [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Mar 22 00:14:35.614: Unexpected error: <*errors.errorString | 0xc003e4e010>: { s: "no subset of available IP address found for the endpoint externalname-service within timeout 2m0s", } no subset of available IP address found for the endpoint externalname-service within timeout 2m0s occurred /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1312 ------------------------------ {"msg":"FAILED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":330,"completed":102,"skipped":1697,"failed":5,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]"]} SSSSS ------------------------------ [sig-node] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 00:14:36.933: INFO: >>> kubeConfig: /root/.kube/config 
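[Editor's note] The container-probe case in progress here verifies that a pod is *not* restarted while its tcp:8080 liveness probe keeps succeeding. As a reference only, a minimal pod exercising the same mechanism might look like the sketch below; the pod name and agnhost arguments are illustrative, not taken from this run.

# Sketch: the container serves HTTP on 8080 and the TCP liveness probe dials
# the same port, so the probe passes and the RESTARTS count should stay at 0.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: liveness-tcp-demo
spec:
  containers:
  - name: server
    image: k8s.gcr.io/e2e-test-images/agnhost:2.28
    args: ["netexec", "--http-port=8080"]
    livenessProbe:
      tcpSocket:
        port: 8080
      initialDelaySeconds: 5
      periodSeconds: 10
EOF
kubectl get pod liveness-tcp-demo -w   # watch the RESTARTS column stay at 0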
STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:53 [It] should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating pod liveness-122d5a88-0ca5-482c-bd67-13315fd30412 in namespace container-probe-4032 Mar 22 00:14:43.254: INFO: Started pod liveness-122d5a88-0ca5-482c-bd67-13315fd30412 in namespace container-probe-4032 STEP: checking the pod's current state and verifying that restartCount is present Mar 22 00:14:43.257: INFO: Initial restart count of pod liveness-122d5a88-0ca5-482c-bd67-13315fd30412 is 0 STEP: deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 00:18:45.121: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-4032" for this suite. • [SLOW TEST:248.200 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]","total":330,"completed":103,"skipped":1702,"failed":5,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]"]} SSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 00:18:45.134: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241 [It] should check if kubectl can dry-run update Pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: running the image k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 Mar 22 00:18:45.319: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=kubectl-3421 run e2e-test-httpd-pod 
--image=k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 --labels=run=e2e-test-httpd-pod' Mar 22 00:18:45.554: INFO: stderr: "" Mar 22 00:18:45.554: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: replace the image in the pod with server-side dry-run Mar 22 00:18:45.554: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=kubectl-3421 patch pod e2e-test-httpd-pod -p {"spec":{"containers":[{"name": "e2e-test-httpd-pod","image": "k8s.gcr.io/e2e-test-images/busybox:1.29"}]}} --dry-run=server' Mar 22 00:18:46.102: INFO: stderr: "" Mar 22 00:18:46.102: INFO: stdout: "pod/e2e-test-httpd-pod patched\n" STEP: verifying the pod e2e-test-httpd-pod has the right image k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 Mar 22 00:18:46.136: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=kubectl-3421 delete pods e2e-test-httpd-pod' Mar 22 00:19:25.019: INFO: stderr: "" Mar 22 00:19:25.019: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 00:19:25.019: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3421" for this suite. • [SLOW TEST:39.950 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl server-side dry-run /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:903 should check if kubectl can dry-run update Pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]","total":330,"completed":104,"skipped":1712,"failed":5,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]"]} SSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 00:19:25.085: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test emptydir 0666 on tmpfs Mar 22 00:19:25.306: INFO: Waiting up to 5m0s for pod "pod-cf6d6173-0283-491a-87d8-32a9196fd9fa" in namespace 
"emptydir-8765" to be "Succeeded or Failed" Mar 22 00:19:25.320: INFO: Pod "pod-cf6d6173-0283-491a-87d8-32a9196fd9fa": Phase="Pending", Reason="", readiness=false. Elapsed: 14.223458ms Mar 22 00:19:27.326: INFO: Pod "pod-cf6d6173-0283-491a-87d8-32a9196fd9fa": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019966441s Mar 22 00:19:29.330: INFO: Pod "pod-cf6d6173-0283-491a-87d8-32a9196fd9fa": Phase="Pending", Reason="", readiness=false. Elapsed: 4.023937156s Mar 22 00:19:31.334: INFO: Pod "pod-cf6d6173-0283-491a-87d8-32a9196fd9fa": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.028423493s STEP: Saw pod success Mar 22 00:19:31.335: INFO: Pod "pod-cf6d6173-0283-491a-87d8-32a9196fd9fa" satisfied condition "Succeeded or Failed" Mar 22 00:19:31.337: INFO: Trying to get logs from node latest-worker pod pod-cf6d6173-0283-491a-87d8-32a9196fd9fa container test-container: STEP: delete the pod Mar 22 00:19:31.405: INFO: Waiting for pod pod-cf6d6173-0283-491a-87d8-32a9196fd9fa to disappear Mar 22 00:19:31.422: INFO: Pod pod-cf6d6173-0283-491a-87d8-32a9196fd9fa no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 00:19:31.423: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8765" for this suite. • [SLOW TEST:6.378 seconds] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":330,"completed":105,"skipped":1717,"failed":5,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]"]} SSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 00:19:31.464: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replica set. 
[Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicaSet STEP: Ensuring resource quota status captures replicaset creation STEP: Deleting a ReplicaSet STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 00:19:42.683: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-5003" for this suite. • [SLOW TEST:11.230 seconds] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replica set. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance]","total":330,"completed":106,"skipped":1727,"failed":5,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]"]} [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 00:19:42.694: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 22 00:19:43.381: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 22 00:19:45.394: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751969183, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751969183, loc:(*time.Location)(0x99208a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", 
Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751969183, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751969183, loc:(*time.Location)(0x99208a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-8977db\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 22 00:19:47.399: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751969183, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751969183, loc:(*time.Location)(0x99208a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751969183, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751969183, loc:(*time.Location)(0x99208a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-8977db\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 22 00:19:50.419: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a validating webhook should work [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a validating webhook configuration STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Updating a validating webhook configuration's rules to not include the create operation STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Patching a validating webhook configuration's rules to include the create operation STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 00:19:50.565: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-9759" for this suite. STEP: Destroying namespace "webhook-9759-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 • [SLOW TEST:8.133 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a validating webhook should work [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":330,"completed":107,"skipped":1727,"failed":5,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 00:19:50.828: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241 [It] should create services for rc [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating Agnhost RC Mar 22 00:19:51.004: INFO: namespace kubectl-4986 Mar 22 00:19:51.004: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=kubectl-4986 create -f -' Mar 22 00:19:51.565: INFO: stderr: "" Mar 22 00:19:51.565: INFO: stdout: "replicationcontroller/agnhost-primary created\n" STEP: Waiting for Agnhost primary to start. Mar 22 00:19:52.609: INFO: Selector matched 1 pods for map[app:agnhost] Mar 22 00:19:52.609: INFO: Found 0 / 1 Mar 22 00:19:53.570: INFO: Selector matched 1 pods for map[app:agnhost] Mar 22 00:19:53.570: INFO: Found 0 / 1 Mar 22 00:19:54.575: INFO: Selector matched 1 pods for map[app:agnhost] Mar 22 00:19:54.575: INFO: Found 0 / 1 Mar 22 00:19:55.569: INFO: Selector matched 1 pods for map[app:agnhost] Mar 22 00:19:55.569: INFO: Found 1 / 1 Mar 22 00:19:55.569: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Mar 22 00:19:55.572: INFO: Selector matched 1 pods for map[app:agnhost] Mar 22 00:19:55.572: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
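[Editor's note] In the expose steps just below, --port is the port the new Service listens on and --target-port is the container port traffic is forwarded to, which is why rm2 (1234) and rm3 (2345) can both front the same container port 6379. The equivalent flow against a hypothetical replication controller:

# Sketch: expose an rc as a service, then re-expose that service under a
# second name and service port; both forward to container port 6379.
kubectl expose rc demo-rc --name=svc-a --port=1234 --target-port=6379
kubectl expose service svc-a --name=svc-b --port=2345 --target-port=6379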
Mar 22 00:19:55.572: INFO: wait on agnhost-primary startup in kubectl-4986 Mar 22 00:19:55.572: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=kubectl-4986 logs agnhost-primary-msrnd agnhost-primary' Mar 22 00:19:55.758: INFO: stderr: "" Mar 22 00:19:55.758: INFO: stdout: "Paused\n" STEP: exposing RC Mar 22 00:19:55.758: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=kubectl-4986 expose rc agnhost-primary --name=rm2 --port=1234 --target-port=6379' Mar 22 00:19:55.893: INFO: stderr: "" Mar 22 00:19:55.893: INFO: stdout: "service/rm2 exposed\n" Mar 22 00:19:55.950: INFO: Service rm2 in namespace kubectl-4986 found. STEP: exposing service Mar 22 00:19:57.958: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=kubectl-4986 expose service rm2 --name=rm3 --port=2345 --target-port=6379' Mar 22 00:19:58.139: INFO: stderr: "" Mar 22 00:19:58.139: INFO: stdout: "service/rm3 exposed\n" Mar 22 00:19:58.171: INFO: Service rm3 in namespace kubectl-4986 found. [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 00:20:00.180: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4986" for this suite. • [SLOW TEST:9.362 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl expose /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1223 should create services for rc [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance]","total":330,"completed":108,"skipped":1753,"failed":5,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 00:20:00.191: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 [It] should update annotations on modification [NodeConformance] [Conformance] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating the pod Mar 22 00:20:00.339: INFO: The status of Pod annotationupdate9b661946-2b4e-4d6b-ba1d-d77dd8ae3a86 is Pending, waiting for it to be Running (with Ready = true) Mar 22 00:20:02.344: INFO: The status of Pod annotationupdate9b661946-2b4e-4d6b-ba1d-d77dd8ae3a86 is Pending, waiting for it to be Running (with Ready = true) Mar 22 00:20:04.344: INFO: The status of Pod annotationupdate9b661946-2b4e-4d6b-ba1d-d77dd8ae3a86 is Running (Ready = true) Mar 22 00:20:04.867: INFO: Successfully updated pod "annotationupdate9b661946-2b4e-4d6b-ba1d-d77dd8ae3a86" [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 00:20:08.917: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3714" for this suite. • [SLOW TEST:8.736 seconds] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should update annotations on modification [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":330,"completed":109,"skipped":1775,"failed":5,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]"]} S ------------------------------ [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 00:20:08.927: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241 [It] should check if v1 is in available api versions [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: validating api versions Mar 22 00:20:08.983: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=kubectl-589 api-versions' Mar 22 00:20:09.210: INFO: stderr: "" Mar 22 00:20:09.210: INFO: stdout: 
"admissionregistration.k8s.io/v1\nadmissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\ndiscovery.k8s.io/v1beta1\nevents.k8s.io/v1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nflowcontrol.apiserver.k8s.io/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1\nnode.k8s.io/v1beta1\npingcap.com/v1alpha1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 00:20:09.210: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-589" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance]","total":330,"completed":110,"skipped":1776,"failed":5,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 00:20:09.221: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:53 [It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating pod busybox-829d455d-a5fc-43ec-97e4-911b434970ae in namespace container-probe-4587 Mar 22 00:20:13.405: INFO: Started pod busybox-829d455d-a5fc-43ec-97e4-911b434970ae in namespace container-probe-4587 STEP: checking the pod's current state and verifying that restartCount is present Mar 22 00:20:13.409: INFO: Initial restart count of pod busybox-829d455d-a5fc-43ec-97e4-911b434970ae is 0 Mar 22 00:21:02.152: INFO: Restart count of pod container-probe-4587/busybox-829d455d-a5fc-43ec-97e4-911b434970ae is now 1 (48.743713473s elapsed) STEP: deleting the pod [AfterEach] [sig-node] Probing container 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 00:21:02.170: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-4587" for this suite. • [SLOW TEST:53.078 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":330,"completed":111,"skipped":1808,"failed":5,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 00:21:02.301: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 22 00:21:02.855: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 22 00:21:04.864: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751969262, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751969262, loc:(*time.Location)(0x99208a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751969262, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751969262, loc:(*time.Location)(0x99208a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-8977db\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 22 00:21:06.870: INFO: deployment status: 
v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751969262, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751969262, loc:(*time.Location)(0x99208a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751969262, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751969262, loc:(*time.Location)(0x99208a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-8977db\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 22 00:21:09.901: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Mar 22 00:21:09.905: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-1897-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 00:21:11.010: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-6333" for this suite. STEP: Destroying namespace "webhook-6333-markers" for this suite. 
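[Editor's note] The case above registers a mutating webhook for a custom resource (e2e-test-webhook-1897-crds.webhook.example.com) via the AdmissionRegistration API, then creates a custom resource that the webhook mutates. A minimal sketch of such a registration follows; the configuration name, service reference, and CRD group/resource are placeholders, not the objects from this run:

kubectl apply -f - <<'EOF'
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: demo-cr-mutator
webhooks:
- name: mutate-crs.demo.example.com
  admissionReviewVersions: ["v1", "v1beta1"]
  sideEffects: None
  failurePolicy: Fail
  clientConfig:
    service:
      namespace: demo-webhook
      name: e2e-test-webhook
      path: /mutating-custom-resource
      port: 443
    # caBundle: <base64 CA for the webhook's serving cert> (needed in practice)
  rules:
  - apiGroups: ["webhook.example.com"]
    apiVersions: ["v1"]
    operations: ["CREATE"]
    resources: ["demo-crds"]
EOF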
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 • [SLOW TEST:8.941 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":330,"completed":112,"skipped":1842,"failed":5,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]"]} SSSSSSSSSSSSSSS ------------------------------ [sig-node] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 00:21:11.243: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/init_container.go:162 [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating the pod Mar 22 00:21:11.326: INFO: PodSpec: initContainers in spec.initContainers Mar 22 00:22:08.880: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-d74f7fad-749d-48a4-bf67-3b025ed853b0", GenerateName:"", Namespace:"init-container-4349", SelfLink:"", UID:"e489cc59-3e8e-4360-b629-0c99a0f93f6f", ResourceVersion:"6994164", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63751969271, loc:(*time.Location)(0x99208a0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"326902977"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc003a01638), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003a01650)}, v1.ManagedFieldsEntry{Manager:"kubelet", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc003a01668), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003a01680)}}}, 
Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-x2gkf", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc00444acc0), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"k8s.gcr.io/e2e-test-images/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-x2gkf", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"k8s.gcr.io/e2e-test-images/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-x2gkf", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.4.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), 
Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-x2gkf", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc003cda208), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"latest-worker", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc004aaa150), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc003cda2b0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc003cda2d0)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc003cda2d8), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc003cda2dc), PreemptionPolicy:(*v1.PreemptionPolicy)(0xc003e4ed00), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751969271, loc:(*time.Location)(0x99208a0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751969271, loc:(*time.Location)(0x99208a0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751969271, loc:(*time.Location)(0x99208a0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751969271, loc:(*time.Location)(0x99208a0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.18.0.9", PodIP:"10.244.2.71", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.244.2.71"}}, StartTime:(*v1.Time)(0xc003a01698), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(0xc003a016b0), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc004aaa230)}, Ready:false, RestartCount:3, Image:"k8s.gcr.io/e2e-test-images/busybox:1.29", ImageID:"k8s.gcr.io/e2e-test-images/busybox@sha256:d67d1eda84b05ebfc47290ba490aa2474caa11d90be6a5ef70da1b3f2ca2a2e7", ContainerID:"containerd://e51a54863d0530720b73c4ad92382ffb1c000443b1ee3c30cca6611f9ee51b8c", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc005170340), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/e2e-test-images/busybox:1.29", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc005170320), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.4.1", ImageID:"", ContainerID:"", Started:(*bool)(0xc003cda35f)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}} [AfterEach] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 00:22:08.881: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-4349" for this suite. 
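------------------------------
The pod dump above is easier to read as the spec it came from: with RestartPolicy "Always", the kubelet keeps restarting the failing init1 (note RestartCount:3 and the Terminated LastTerminationState), init2 stays Waiting, and the app container run1 never starts, which is exactly what the spec asserts. A minimal sketch of that pod, reconstructed from the dump; the kubeconfig path, pod name, and namespace here are assumptions.

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-init-demo"},
		Spec: corev1.PodSpec{
			// Init containers run in order; each must exit 0 before the next starts.
			InitContainers: []corev1.Container{
				{Name: "init1", Image: "k8s.gcr.io/e2e-test-images/busybox:1.29", Command: []string{"/bin/false"}},
				{Name: "init2", Image: "k8s.gcr.io/e2e-test-images/busybox:1.29", Command: []string{"/bin/true"}},
			},
			Containers: []corev1.Container{
				{Name: "run1", Image: "k8s.gcr.io/pause:3.4.1"},
			},
			RestartPolicy: corev1.RestartPolicyAlways,
		},
	}
	if _, err := client.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
------------------------------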
• [SLOW TEST:57.888 seconds] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should not start app containers if init containers fail on a RestartAlways pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":330,"completed":113,"skipped":1857,"failed":5,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]"]} SSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 00:22:09.131: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test emptydir 0666 on tmpfs Mar 22 00:22:09.317: INFO: Waiting up to 5m0s for pod "pod-ed6729fd-bb5d-42c9-881c-f67393e2d3c2" in namespace "emptydir-8438" to be "Succeeded or Failed" Mar 22 00:22:09.349: INFO: Pod "pod-ed6729fd-bb5d-42c9-881c-f67393e2d3c2": Phase="Pending", Reason="", readiness=false. Elapsed: 32.114371ms Mar 22 00:22:11.363: INFO: Pod "pod-ed6729fd-bb5d-42c9-881c-f67393e2d3c2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.046062335s Mar 22 00:22:13.369: INFO: Pod "pod-ed6729fd-bb5d-42c9-881c-f67393e2d3c2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.051419678s STEP: Saw pod success Mar 22 00:22:13.369: INFO: Pod "pod-ed6729fd-bb5d-42c9-881c-f67393e2d3c2" satisfied condition "Succeeded or Failed" Mar 22 00:22:13.371: INFO: Trying to get logs from node latest-worker pod pod-ed6729fd-bb5d-42c9-881c-f67393e2d3c2 container test-container: STEP: delete the pod Mar 22 00:22:13.469: INFO: Waiting for pod pod-ed6729fd-bb5d-42c9-881c-f67393e2d3c2 to disappear Mar 22 00:22:13.481: INFO: Pod pod-ed6729fd-bb5d-42c9-881c-f67393e2d3c2 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 00:22:13.482: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8438" for this suite. 
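------------------------------
Roughly what the emptydir pod above looks like: a tmpfs-backed emptyDir mounted into a short-lived container that creates a file with mode 0666 and exits, letting the pod reach "Succeeded". A fragment sketching the volume and container shape; the names, image, and command are illustrative (the suite itself uses an agnhost mount-test image).

package sketch

import corev1 "k8s.io/api/core/v1"

// tmpfsPodSpec sketches the (root,0666,tmpfs) case: medium "Memory" makes the
// emptyDir a tmpfs mount, and the container reports the created file's mode.
func tmpfsPodSpec() corev1.PodSpec {
	return corev1.PodSpec{
		Volumes: []corev1.Volume{{
			Name: "test-volume",
			VolumeSource: corev1.VolumeSource{
				EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
			},
		}},
		Containers: []corev1.Container{{
			Name:         "test-container",
			Image:        "k8s.gcr.io/e2e-test-images/busybox:1.29",
			Command:      []string{"sh", "-c", "touch /mnt/test/file && chmod 0666 /mnt/test/file && stat -c %a /mnt/test/file"},
			VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/mnt/test"}},
		}},
		RestartPolicy: corev1.RestartPolicyNever, // lets the pod terminate as "Succeeded"
	}
}
------------------------------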
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":330,"completed":114,"skipped":1864,"failed":5,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]"]} SSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 00:22:13.491: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward api env vars Mar 22 00:22:13.614: INFO: Waiting up to 5m0s for pod "downward-api-dce13e68-1433-4e58-94b3-1a30147c0a2a" in namespace "downward-api-2224" to be "Succeeded or Failed" Mar 22 00:22:13.647: INFO: Pod "downward-api-dce13e68-1433-4e58-94b3-1a30147c0a2a": Phase="Pending", Reason="", readiness=false. Elapsed: 32.852487ms Mar 22 00:22:15.652: INFO: Pod "downward-api-dce13e68-1433-4e58-94b3-1a30147c0a2a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037847967s Mar 22 00:22:17.656: INFO: Pod "downward-api-dce13e68-1433-4e58-94b3-1a30147c0a2a": Phase="Running", Reason="", readiness=true. Elapsed: 4.042150279s Mar 22 00:22:19.660: INFO: Pod "downward-api-dce13e68-1433-4e58-94b3-1a30147c0a2a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.046157354s STEP: Saw pod success Mar 22 00:22:19.660: INFO: Pod "downward-api-dce13e68-1433-4e58-94b3-1a30147c0a2a" satisfied condition "Succeeded or Failed" Mar 22 00:22:19.663: INFO: Trying to get logs from node latest-worker pod downward-api-dce13e68-1433-4e58-94b3-1a30147c0a2a container dapi-container: STEP: delete the pod Mar 22 00:22:19.703: INFO: Waiting for pod downward-api-dce13e68-1433-4e58-94b3-1a30147c0a2a to disappear Mar 22 00:22:19.724: INFO: Pod downward-api-dce13e68-1433-4e58-94b3-1a30147c0a2a no longer exists [AfterEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 00:22:19.724: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2224" for this suite. 
• [SLOW TEST:6.242 seconds] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":330,"completed":115,"skipped":1880,"failed":5,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]"]} SSSSSSS ------------------------------ [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-node] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 00:22:19.733: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:52 STEP: create the container to handle the HTTPGet hook request. 
Mar 22 00:22:19.902: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Mar 22 00:22:21.906: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Mar 22 00:22:23.907: INFO: The status of Pod pod-handle-http-request is Running (Ready = true) [It] should execute poststart http hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: create the pod with lifecycle hook Mar 22 00:22:23.946: INFO: The status of Pod pod-with-poststart-http-hook is Pending, waiting for it to be Running (with Ready = true) Mar 22 00:22:25.950: INFO: The status of Pod pod-with-poststart-http-hook is Pending, waiting for it to be Running (with Ready = true) Mar 22 00:22:27.952: INFO: The status of Pod pod-with-poststart-http-hook is Running (Ready = true) STEP: check poststart hook STEP: delete the pod with lifecycle hook Mar 22 00:22:27.983: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Mar 22 00:22:27.993: INFO: Pod pod-with-poststart-http-hook still exists Mar 22 00:22:29.993: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Mar 22 00:22:30.018: INFO: Pod pod-with-poststart-http-hook still exists Mar 22 00:22:31.993: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Mar 22 00:22:31.998: INFO: Pod pod-with-poststart-http-hook still exists Mar 22 00:22:33.993: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Mar 22 00:22:33.998: INFO: Pod pod-with-poststart-http-hook still exists Mar 22 00:22:35.993: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Mar 22 00:22:35.997: INFO: Pod pod-with-poststart-http-hook still exists Mar 22 00:22:37.994: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Mar 22 00:22:37.999: INFO: Pod pod-with-poststart-http-hook still exists Mar 22 00:22:39.993: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Mar 22 00:22:39.996: INFO: Pod pod-with-poststart-http-hook still exists Mar 22 00:22:41.993: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Mar 22 00:22:41.997: INFO: Pod pod-with-poststart-http-hook still exists Mar 22 00:22:43.994: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Mar 22 00:22:43.999: INFO: Pod pod-with-poststart-http-hook still exists Mar 22 00:22:45.994: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Mar 22 00:22:45.999: INFO: Pod pod-with-poststart-http-hook no longer exists [AfterEach] [sig-node] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 00:22:45.999: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-6892" for this suite. 
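------------------------------
The hook shape being exercised: a postStart HTTPGet aimed at the handler pod created in BeforeEach. A fragment under stated assumptions: the handler IP, path, and port are placeholders, and on this v1.21-era API the handler type is corev1.Handler (renamed LifecycleHandler in later releases).

package sketch

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

// withPostStartHook attaches an HTTP postStart hook; the kubelet issues the
// GET right after the container starts, and a failed hook causes the
// container to be killed and restarted per its restart policy.
func withPostStartHook(c corev1.Container, handlerIP string) corev1.Container {
	c.Lifecycle = &corev1.Lifecycle{
		PostStart: &corev1.Handler{
			HTTPGet: &corev1.HTTPGetAction{
				Path: "/echo?msg=poststart", // placeholder path on pod-handle-http-request
				Host: handlerIP,
				Port: intstr.FromInt(8080), // placeholder port
			},
		},
	}
	return c
}
------------------------------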
• [SLOW TEST:26.277 seconds] [sig-node] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 when create a pod with lifecycle hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:43 should execute poststart http hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":330,"completed":116,"skipped":1887,"failed":5,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]"]} S ------------------------------ [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 00:22:46.010: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should receive events on concurrent watches in same order [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: starting a background goroutine to produce watch events STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order [AfterEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 00:22:49.719: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-7923" for this suite. 
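------------------------------
The guarantee this spec checks: watches opened from the same resourceVersion must replay events in the same order. A sketch of that comparison against ConfigMaps; the kubeconfig path, namespace, and fixed event count are assumptions, events are assumed to be produced in the background (as the test's goroutine does), and the real test fans out one watch per produced resourceVersion rather than two.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)
	ctx := context.TODO()

	// Both watches start from the resourceVersion of the same LIST, so the
	// apiserver must deliver them identical event sequences.
	list, err := client.CoreV1().ConfigMaps("default").List(ctx, metav1.ListOptions{})
	if err != nil {
		panic(err)
	}

	var orders [2][]string
	for i := range orders {
		w, err := client.CoreV1().ConfigMaps("default").Watch(ctx,
			metav1.ListOptions{ResourceVersion: list.ResourceVersion})
		if err != nil {
			panic(err)
		}
		for len(orders[i]) < 5 { // read a fixed number of events, then stop
			ev := <-w.ResultChan()
			if cm, ok := ev.Object.(*corev1.ConfigMap); ok {
				orders[i] = append(orders[i], cm.ResourceVersion)
			}
		}
		w.Stop()
	}
	fmt.Println("orders match:", fmt.Sprint(orders[0]) == fmt.Sprint(orders[1]))
}
------------------------------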
•{"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":330,"completed":117,"skipped":1888,"failed":5,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]"]} SSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 00:22:49.818: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin Mar 22 00:22:49.925: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8fd6df64-7486-47e7-9064-13b399f3ae1f" in namespace "downward-api-8291" to be "Succeeded or Failed" Mar 22 00:22:49.933: INFO: Pod "downwardapi-volume-8fd6df64-7486-47e7-9064-13b399f3ae1f": Phase="Pending", Reason="", readiness=false. Elapsed: 7.780938ms Mar 22 00:22:51.981: INFO: Pod "downwardapi-volume-8fd6df64-7486-47e7-9064-13b399f3ae1f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.05567817s Mar 22 00:22:54.012: INFO: Pod "downwardapi-volume-8fd6df64-7486-47e7-9064-13b399f3ae1f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.087164781s STEP: Saw pod success Mar 22 00:22:54.012: INFO: Pod "downwardapi-volume-8fd6df64-7486-47e7-9064-13b399f3ae1f" satisfied condition "Succeeded or Failed" Mar 22 00:22:54.015: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-8fd6df64-7486-47e7-9064-13b399f3ae1f container client-container: STEP: delete the pod Mar 22 00:22:54.035: INFO: Waiting for pod downwardapi-volume-8fd6df64-7486-47e7-9064-13b399f3ae1f to disappear Mar 22 00:22:54.055: INFO: Pod downwardapi-volume-8fd6df64-7486-47e7-9064-13b399f3ae1f no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 00:22:54.055: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8291" for this suite. 
•{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":330,"completed":118,"skipped":1894,"failed":5,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]"]} ------------------------------ [sig-apps] Deployment should run the lifecycle of a Deployment [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 00:22:54.064: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:86 [It] should run the lifecycle of a Deployment [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a Deployment STEP: waiting for Deployment to be created STEP: waiting for all Replicas to be Ready Mar 22 00:22:54.285: INFO: observed Deployment test-deployment in namespace deployment-6504 with ReadyReplicas 0 and labels map[test-deployment-static:true] Mar 22 00:22:54.285: INFO: observed Deployment test-deployment in namespace deployment-6504 with ReadyReplicas 0 and labels map[test-deployment-static:true] Mar 22 00:22:54.440: INFO: observed Deployment test-deployment in namespace deployment-6504 with ReadyReplicas 0 and labels map[test-deployment-static:true] Mar 22 00:22:54.440: INFO: observed Deployment test-deployment in namespace deployment-6504 with ReadyReplicas 0 and labels map[test-deployment-static:true] Mar 22 00:22:54.491: INFO: observed Deployment test-deployment in namespace deployment-6504 with ReadyReplicas 0 and labels map[test-deployment-static:true] Mar 22 00:22:54.491: INFO: observed Deployment test-deployment in namespace deployment-6504 with ReadyReplicas 0 and labels map[test-deployment-static:true] Mar 22 00:22:54.563: INFO: observed Deployment test-deployment in namespace deployment-6504 with ReadyReplicas 0 and labels map[test-deployment-static:true] Mar 22 00:22:54.563: INFO: observed Deployment test-deployment in namespace deployment-6504 with ReadyReplicas 0 and labels map[test-deployment-static:true] Mar 22 00:22:58.072: INFO: observed Deployment test-deployment in namespace deployment-6504 with ReadyReplicas 1 and labels map[test-deployment-static:true] Mar 22 00:22:58.072: INFO: observed Deployment test-deployment in namespace deployment-6504 with ReadyReplicas 1 and labels map[test-deployment-static:true] Mar 22 00:22:59.066: INFO: observed Deployment test-deployment in namespace deployment-6504 with ReadyReplicas 2 and labels map[test-deployment-static:true] STEP: patching the Deployment Mar 22 00:22:59.078: INFO: observed event type ADDED STEP: waiting 
for Replicas to scale Mar 22 00:22:59.081: INFO: observed Deployment test-deployment in namespace deployment-6504 with ReadyReplicas 0 Mar 22 00:22:59.081: INFO: observed Deployment test-deployment in namespace deployment-6504 with ReadyReplicas 0 Mar 22 00:22:59.081: INFO: observed Deployment test-deployment in namespace deployment-6504 with ReadyReplicas 0 Mar 22 00:22:59.081: INFO: observed Deployment test-deployment in namespace deployment-6504 with ReadyReplicas 0 Mar 22 00:22:59.081: INFO: observed Deployment test-deployment in namespace deployment-6504 with ReadyReplicas 0 Mar 22 00:22:59.081: INFO: observed Deployment test-deployment in namespace deployment-6504 with ReadyReplicas 0 Mar 22 00:22:59.081: INFO: observed Deployment test-deployment in namespace deployment-6504 with ReadyReplicas 0 Mar 22 00:22:59.081: INFO: observed Deployment test-deployment in namespace deployment-6504 with ReadyReplicas 0 Mar 22 00:22:59.081: INFO: observed Deployment test-deployment in namespace deployment-6504 with ReadyReplicas 1 Mar 22 00:22:59.081: INFO: observed Deployment test-deployment in namespace deployment-6504 with ReadyReplicas 1 Mar 22 00:22:59.081: INFO: observed Deployment test-deployment in namespace deployment-6504 with ReadyReplicas 2 Mar 22 00:22:59.081: INFO: observed Deployment test-deployment in namespace deployment-6504 with ReadyReplicas 2 Mar 22 00:22:59.081: INFO: observed Deployment test-deployment in namespace deployment-6504 with ReadyReplicas 2 Mar 22 00:22:59.081: INFO: observed Deployment test-deployment in namespace deployment-6504 with ReadyReplicas 2 Mar 22 00:22:59.149: INFO: observed Deployment test-deployment in namespace deployment-6504 with ReadyReplicas 2 Mar 22 00:22:59.149: INFO: observed Deployment test-deployment in namespace deployment-6504 with ReadyReplicas 2 Mar 22 00:22:59.237: INFO: observed Deployment test-deployment in namespace deployment-6504 with ReadyReplicas 2 Mar 22 00:22:59.237: INFO: observed Deployment test-deployment in namespace deployment-6504 with ReadyReplicas 2 Mar 22 00:22:59.372: INFO: observed Deployment test-deployment in namespace deployment-6504 with ReadyReplicas 2 Mar 22 00:22:59.372: INFO: observed Deployment test-deployment in namespace deployment-6504 with ReadyReplicas 2 Mar 22 00:22:59.386: INFO: observed Deployment test-deployment in namespace deployment-6504 with ReadyReplicas 1 STEP: listing Deployments Mar 22 00:22:59.577: INFO: Found test-deployment with labels: map[test-deployment:patched test-deployment-static:true] STEP: updating the Deployment Mar 22 00:22:59.629: INFO: observed Deployment test-deployment in namespace deployment-6504 with ReadyReplicas 1 STEP: fetching the DeploymentStatus Mar 22 00:22:59.762: INFO: observed Deployment test-deployment in namespace deployment-6504 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true] Mar 22 00:22:59.862: INFO: observed Deployment test-deployment in namespace deployment-6504 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true] Mar 22 00:22:59.915: INFO: observed Deployment test-deployment in namespace deployment-6504 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true] Mar 22 00:23:00.474: INFO: observed Deployment test-deployment in namespace deployment-6504 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true] Mar 22 00:23:00.972: INFO: observed Deployment test-deployment in namespace deployment-6504 with ReadyReplicas 
1 and labels map[test-deployment:updated test-deployment-static:true] Mar 22 00:23:01.016: INFO: observed Deployment test-deployment in namespace deployment-6504 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true] Mar 22 00:23:01.779: INFO: observed Deployment test-deployment in namespace deployment-6504 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true] Mar 22 00:23:02.094: INFO: observed Deployment test-deployment in namespace deployment-6504 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true] STEP: patching the DeploymentStatus STEP: fetching the DeploymentStatus Mar 22 00:23:06.685: INFO: observed Deployment test-deployment in namespace deployment-6504 with ReadyReplicas 1 Mar 22 00:23:06.685: INFO: observed Deployment test-deployment in namespace deployment-6504 with ReadyReplicas 1 Mar 22 00:23:06.685: INFO: observed Deployment test-deployment in namespace deployment-6504 with ReadyReplicas 1 Mar 22 00:23:06.686: INFO: observed Deployment test-deployment in namespace deployment-6504 with ReadyReplicas 1 Mar 22 00:23:06.686: INFO: observed Deployment test-deployment in namespace deployment-6504 with ReadyReplicas 1 Mar 22 00:23:06.686: INFO: observed Deployment test-deployment in namespace deployment-6504 with ReadyReplicas 1 Mar 22 00:23:06.686: INFO: observed Deployment test-deployment in namespace deployment-6504 with ReadyReplicas 1 Mar 22 00:23:06.686: INFO: observed Deployment test-deployment in namespace deployment-6504 with ReadyReplicas 1 STEP: deleting the Deployment Mar 22 00:23:07.060: INFO: observed event type MODIFIED Mar 22 00:23:07.060: INFO: observed event type MODIFIED Mar 22 00:23:07.060: INFO: observed event type MODIFIED Mar 22 00:23:07.061: INFO: observed event type MODIFIED Mar 22 00:23:07.061: INFO: observed event type MODIFIED Mar 22 00:23:07.061: INFO: observed event type MODIFIED Mar 22 00:23:07.061: INFO: observed event type MODIFIED Mar 22 00:23:07.061: INFO: observed event type MODIFIED Mar 22 00:23:07.061: INFO: observed event type MODIFIED Mar 22 00:23:07.061: INFO: observed event type MODIFIED Mar 22 00:23:07.061: INFO: observed event type MODIFIED Mar 22 00:23:07.061: INFO: observed event type MODIFIED [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:80 Mar 22 00:23:07.123: INFO: Log out all the ReplicaSets if there is no deployment created Mar 22 00:23:07.127: INFO: ReplicaSet "test-deployment-76bffdfd4b": &ReplicaSet{ObjectMeta:{test-deployment-76bffdfd4b deployment-6504 c5d40442-44b8-4e4c-8419-dc297299a73f 6994702 4 2021-03-22 00:22:59 +0000 UTC map[pod-template-hash:76bffdfd4b test-deployment-static:true] map[deployment.kubernetes.io/desired-replicas:2 deployment.kubernetes.io/max-replicas:3 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-deployment 8623ef09-2e2a-4a63-8c99-3f9cd31a8754 0xc0034d2b47 0xc0034d2b48}] [] [{kube-controller-manager Update apps/v1 2021-03-22 00:23:06 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8623ef09-2e2a-4a63-8c99-3f9cd31a8754\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"test-deployment\"}":{".":{},"f:command":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{pod-template-hash: 76bffdfd4b,test-deployment-static: true,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[pod-template-hash:76bffdfd4b test-deployment-static:true] map[] [] [] []} {[] [] [{test-deployment k8s.gcr.io/pause:3.4.1 [/bin/sleep 100000] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc0034d2bc8 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:4,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Mar 22 00:23:07.131: INFO: ReplicaSet "test-deployment-7778d6bf57": &ReplicaSet{ObjectMeta:{test-deployment-7778d6bf57 deployment-6504 01de9173-6595-4f9b-a308-b8840758e96b 6994614 2 2021-03-22 00:22:54 +0000 UTC map[pod-template-hash:7778d6bf57 test-deployment-static:true] map[deployment.kubernetes.io/desired-replicas:2 deployment.kubernetes.io/max-replicas:3 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-deployment 8623ef09-2e2a-4a63-8c99-3f9cd31a8754 0xc0034d2c37 0xc0034d2c38}] [] [{kube-controller-manager Update apps/v1 2021-03-22 00:22:57 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8623ef09-2e2a-4a63-8c99-3f9cd31a8754\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"test-deployment\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{pod-template-hash: 7778d6bf57,test-deployment-static: true,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[pod-template-hash:7778d6bf57 test-deployment-static:true] map[] [] [] []} {[] [] [{test-deployment k8s.gcr.io/e2e-test-images/agnhost:2.28 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc0034d2ca0 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Mar 22 00:23:07.169: INFO: pod: "test-deployment-7778d6bf57-hmn8k": &Pod{ObjectMeta:{test-deployment-7778d6bf57-hmn8k test-deployment-7778d6bf57- deployment-6504 3848547b-a974-4aa4-9c97-c7216c4a2f56 6994577 0 2021-03-22 00:22:54 +0000 UTC map[pod-template-hash:7778d6bf57 test-deployment-static:true] map[] [{apps/v1 ReplicaSet test-deployment-7778d6bf57 01de9173-6595-4f9b-a308-b8840758e96b 0xc00329b207 0xc00329b208}] [] [{kube-controller-manager Update v1 2021-03-22 00:22:54 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"01de9173-6595-4f9b-a308-b8840758e96b\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"test-deployment\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-03-22 00:22:57 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.77\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-65lpz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-65lpz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:test-deployment,Image:k8s.gcr.io/e2e-test-images/agnhost:2.28,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-65lpz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-22 00:22:54 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-22 00:22:57 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 
+0000 UTC,LastTransitionTime:2021-03-22 00:22:57 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-22 00:22:54 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.9,PodIP:10.244.2.77,StartTime:2021-03-22 00:22:54 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:test-deployment,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-03-22 00:22:57 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/agnhost:2.28,ImageID:k8s.gcr.io/e2e-test-images/agnhost@sha256:4f3eec1e602e1e0b0e5858c8f3d399d0ef536799ec1f7666e025f211001cb706,ContainerID:containerd://11c046eae669b713332f90a116c445c650eb3e8f95d404c545097ba4cb7af32f,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.77,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 22 00:23:07.169: INFO: ReplicaSet "test-deployment-85d87c6f4b": &ReplicaSet{ObjectMeta:{test-deployment-85d87c6f4b deployment-6504 102552e2-7c6a-4416-b760-86b9b8f72ff3 6994703 3 2021-03-22 00:22:59 +0000 UTC map[pod-template-hash:85d87c6f4b test-deployment-static:true] map[deployment.kubernetes.io/desired-replicas:2 deployment.kubernetes.io/max-replicas:3 deployment.kubernetes.io/revision:3] [{apps/v1 Deployment test-deployment 8623ef09-2e2a-4a63-8c99-3f9cd31a8754 0xc0034d2d07 0xc0034d2d08}] [] [{kube-controller-manager Update apps/v1 2021-03-22 00:23:06 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8623ef09-2e2a-4a63-8c99-3f9cd31a8754\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"test-deployment\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*2,Selector:&v1.LabelSelector{MatchLabels:map[string]string{pod-template-hash: 85d87c6f4b,test-deployment-static: true,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[pod-template-hash:85d87c6f4b test-deployment-static:true] map[] [] [] []} {[] [] [{test-deployment k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc0034d2d70 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] 
}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:2,FullyLabeledReplicas:2,ObservedGeneration:3,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Mar 22 00:23:07.174: INFO: pod: "test-deployment-85d87c6f4b-7766l": &Pod{ObjectMeta:{test-deployment-85d87c6f4b-7766l test-deployment-85d87c6f4b- deployment-6504 efa8925e-f482-41a0-a62a-548ded96e237 6994706 0 2021-03-22 00:23:06 +0000 UTC map[pod-template-hash:85d87c6f4b test-deployment-static:true] map[] [{apps/v1 ReplicaSet test-deployment-85d87c6f4b 102552e2-7c6a-4416-b760-86b9b8f72ff3 0xc00329bc17 0xc00329bc18}] [] [{kube-controller-manager Update v1 2021-03-22 00:23:06 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"102552e2-7c6a-4416-b760-86b9b8f72ff3\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"test-deployment\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-03-22 00:23:06 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-65lpz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-65lpz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:test-deployment,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-65lpz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,Supplementa
lGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-22 00:23:06 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-22 00:23:06 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [test-deployment],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-22 00:23:06 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [test-deployment],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-22 00:23:06 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.9,PodIP:,StartTime:2021-03-22 00:23:06 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:test-deployment,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 22 00:23:07.174: INFO: pod: "test-deployment-85d87c6f4b-npt2b": &Pod{ObjectMeta:{test-deployment-85d87c6f4b-npt2b test-deployment-85d87c6f4b- deployment-6504 74a1f614-bb5e-40a1-9775-4f07d12854d6 6994681 0 2021-03-22 00:23:00 +0000 UTC map[pod-template-hash:85d87c6f4b test-deployment-static:true] map[] [{apps/v1 ReplicaSet test-deployment-85d87c6f4b 102552e2-7c6a-4416-b760-86b9b8f72ff3 0xc00329bdb7 0xc00329bdb8}] [] [{kube-controller-manager Update v1 2021-03-22 00:23:00 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"102552e2-7c6a-4416-b760-86b9b8f72ff3\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"test-deployment\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-03-22 00:23:06 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.80\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-65lpz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-65lpz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:test-deployment,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-65lpz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-22 00:23:01 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-22 00:23:05 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 
+0000 UTC,LastTransitionTime:2021-03-22 00:23:05 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-22 00:23:00 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.9,PodIP:10.244.2.80,StartTime:2021-03-22 00:23:01 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:test-deployment,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-03-22 00:23:05 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:containerd://32570846f4fde0728fb4493921dc49b7da7053b35445377b262d26e712779db3,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.80,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 00:23:07.174: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-6504" for this suite. • [SLOW TEST:13.119 seconds] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run the lifecycle of a Deployment [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] Deployment should run the lifecycle of a Deployment [Conformance]","total":330,"completed":119,"skipped":1894,"failed":5,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 00:23:07.184: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete RS created by deployment when not orphaning [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for all rs to be garbage collected STEP: expected 0 pods, got 2 pods STEP: expected 0 rs, got 1 rs STEP: Gathering metrics W0322 00:23:08.846108 7 metrics_grabber.go:105] Did not receive an external client interface. 
Grabbing metrics from ClusterAutoscaler is disabled. Mar 22 00:24:10.866: INFO: MetricsGrabber failed to grab metrics. Skipping metrics gathering. [AfterEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 00:24:10.866: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-9186" for this suite. • [SLOW TEST:63.693 seconds] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should delete RS created by deployment when not orphaning [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":330,"completed":120,"skipped":1929,"failed":5,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]"]} S ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 00:24:10.877: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] creating/deleting custom resource definition objects works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Mar 22 00:24:11.010: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 00:24:12.076: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-9818" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]","total":330,"completed":121,"skipped":1930,"failed":5,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 00:24:12.089: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating service in namespace services-2511 STEP: creating service affinity-nodeport-transition in namespace services-2511 STEP: creating replication controller affinity-nodeport-transition in namespace services-2511 I0322 00:24:12.273067 7 runners.go:190] Created replication controller with name: affinity-nodeport-transition, namespace: services-2511, replica count: 3 I0322 00:24:15.323891 7 runners.go:190] affinity-nodeport-transition Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0322 00:24:18.324757 7 runners.go:190] affinity-nodeport-transition Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Mar 22 00:24:18.333: INFO: Creating new exec pod E0322 00:24:22.385250 7 reflector.go:138] k8s.io/kubernetes/test/e2e/framework/service/jig.go:437: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: the server could not find the requested resource E0322 00:24:23.614582 7 reflector.go:138] k8s.io/kubernetes/test/e2e/framework/service/jig.go:437: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: the server could not find the requested resource E0322 00:24:25.568062 7 reflector.go:138] k8s.io/kubernetes/test/e2e/framework/service/jig.go:437: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: the server could not find the requested resource E0322 00:24:30.156430 7 reflector.go:138] k8s.io/kubernetes/test/e2e/framework/service/jig.go:437: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: the server could not find the requested resource E0322 00:24:41.156614 7 reflector.go:138] k8s.io/kubernetes/test/e2e/framework/service/jig.go:437: Failed to watch *v1.EndpointSlice: 
failed to list *v1.EndpointSlice: the server could not find the requested resource E0322 00:24:59.736896 7 reflector.go:138] k8s.io/kubernetes/test/e2e/framework/service/jig.go:437: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: the server could not find the requested resource E0322 00:25:31.162331 7 reflector.go:138] k8s.io/kubernetes/test/e2e/framework/service/jig.go:437: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: the server could not find the requested resource Mar 22 00:26:22.384: FAIL: Unexpected error: <*errors.errorString | 0xc00271c060>: { s: "no subset of available IP address found for the endpoint affinity-nodeport-transition within timeout 2m0s", } no subset of available IP address found for the endpoint affinity-nodeport-transition within timeout 2m0s occurred Full Stack Trace k8s.io/kubernetes/test/e2e/network.execAffinityTestForNonLBServiceWithOptionalTransition(0xc000f59760, 0x73e8b88, 0xc003d3f4a0, 0xc0002a7680, 0x1) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2563 +0x625 k8s.io/kubernetes/test/e2e/network.execAffinityTestForNonLBServiceWithTransition(...) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2518 k8s.io/kubernetes/test/e2e/network.glob..func24.27() /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1862 +0xa5 k8s.io/kubernetes/test/e2e.RunE2ETests(0xc002c6a180) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c k8s.io/kubernetes/test/e2e.TestE2E(0xc002c6a180) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b testing.tRunner(0xc002c6a180, 0x6d60740) /usr/local/go/src/testing/testing.go:1194 +0xef created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1239 +0x2b3 Mar 22 00:26:22.385: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-nodeport-transition in namespace services-2511, will wait for the garbage collector to delete the pods Mar 22 00:26:22.511: INFO: Deleting ReplicationController affinity-nodeport-transition took: 5.621259ms Mar 22 00:26:23.113: INFO: Terminating ReplicationController affinity-nodeport-transition pods took: 601.069299ms [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 STEP: Collecting events from namespace "services-2511". STEP: Found 23 events. 
Mar 22 00:27:25.476: INFO: At 2021-03-22 00:24:12 +0000 UTC - event for affinity-nodeport-transition: {replication-controller } SuccessfulCreate: Created pod: affinity-nodeport-transition-dh2xp Mar 22 00:27:25.476: INFO: At 2021-03-22 00:24:12 +0000 UTC - event for affinity-nodeport-transition: {replication-controller } SuccessfulCreate: Created pod: affinity-nodeport-transition-bpzqq Mar 22 00:27:25.476: INFO: At 2021-03-22 00:24:12 +0000 UTC - event for affinity-nodeport-transition: {replication-controller } SuccessfulCreate: Created pod: affinity-nodeport-transition-w8n6g Mar 22 00:27:25.476: INFO: At 2021-03-22 00:24:12 +0000 UTC - event for affinity-nodeport-transition-bpzqq: {default-scheduler } Scheduled: Successfully assigned services-2511/affinity-nodeport-transition-bpzqq to latest-worker2 Mar 22 00:27:25.476: INFO: At 2021-03-22 00:24:12 +0000 UTC - event for affinity-nodeport-transition-dh2xp: {default-scheduler } Scheduled: Successfully assigned services-2511/affinity-nodeport-transition-dh2xp to latest-worker2 Mar 22 00:27:25.476: INFO: At 2021-03-22 00:24:12 +0000 UTC - event for affinity-nodeport-transition-w8n6g: {default-scheduler } Scheduled: Successfully assigned services-2511/affinity-nodeport-transition-w8n6g to latest-worker2 Mar 22 00:27:25.476: INFO: At 2021-03-22 00:24:14 +0000 UTC - event for affinity-nodeport-transition-dh2xp: {kubelet latest-worker2} Pulled: Container image "k8s.gcr.io/e2e-test-images/agnhost:2.28" already present on machine Mar 22 00:27:25.476: INFO: At 2021-03-22 00:24:14 +0000 UTC - event for affinity-nodeport-transition-w8n6g: {kubelet latest-worker2} Pulled: Container image "k8s.gcr.io/e2e-test-images/agnhost:2.28" already present on machine Mar 22 00:27:25.476: INFO: At 2021-03-22 00:24:15 +0000 UTC - event for affinity-nodeport-transition-bpzqq: {kubelet latest-worker2} Pulled: Container image "k8s.gcr.io/e2e-test-images/agnhost:2.28" already present on machine Mar 22 00:27:25.476: INFO: At 2021-03-22 00:24:16 +0000 UTC - event for affinity-nodeport-transition-dh2xp: {kubelet latest-worker2} Started: Started container affinity-nodeport-transition Mar 22 00:27:25.476: INFO: At 2021-03-22 00:24:16 +0000 UTC - event for affinity-nodeport-transition-dh2xp: {kubelet latest-worker2} Created: Created container affinity-nodeport-transition Mar 22 00:27:25.476: INFO: At 2021-03-22 00:24:16 +0000 UTC - event for affinity-nodeport-transition-w8n6g: {kubelet latest-worker2} Created: Created container affinity-nodeport-transition Mar 22 00:27:25.476: INFO: At 2021-03-22 00:24:16 +0000 UTC - event for affinity-nodeport-transition-w8n6g: {kubelet latest-worker2} Started: Started container affinity-nodeport-transition Mar 22 00:27:25.476: INFO: At 2021-03-22 00:24:17 +0000 UTC - event for affinity-nodeport-transition-bpzqq: {kubelet latest-worker2} Created: Created container affinity-nodeport-transition Mar 22 00:27:25.476: INFO: At 2021-03-22 00:24:17 +0000 UTC - event for affinity-nodeport-transition-bpzqq: {kubelet latest-worker2} Started: Started container affinity-nodeport-transition Mar 22 00:27:25.476: INFO: At 2021-03-22 00:24:18 +0000 UTC - event for execpod-affinitysdklw: {default-scheduler } Scheduled: Successfully assigned services-2511/execpod-affinitysdklw to latest-worker Mar 22 00:27:25.476: INFO: At 2021-03-22 00:24:19 +0000 UTC - event for execpod-affinitysdklw: {kubelet latest-worker} Pulled: Container image "k8s.gcr.io/e2e-test-images/agnhost:2.28" already present on machine Mar 22 00:27:25.476: INFO: At 2021-03-22 00:24:20 +0000 UTC 
- event for execpod-affinitysdklw: {kubelet latest-worker} Created: Created container agnhost-container Mar 22 00:27:25.476: INFO: At 2021-03-22 00:24:21 +0000 UTC - event for execpod-affinitysdklw: {kubelet latest-worker} Started: Started container agnhost-container Mar 22 00:27:25.476: INFO: At 2021-03-22 00:26:22 +0000 UTC - event for execpod-affinitysdklw: {kubelet latest-worker} Killing: Stopping container agnhost-container Mar 22 00:27:25.476: INFO: At 2021-03-22 00:26:23 +0000 UTC - event for affinity-nodeport-transition-bpzqq: {kubelet latest-worker2} Killing: Stopping container affinity-nodeport-transition Mar 22 00:27:25.476: INFO: At 2021-03-22 00:26:23 +0000 UTC - event for affinity-nodeport-transition-dh2xp: {kubelet latest-worker2} Killing: Stopping container affinity-nodeport-transition Mar 22 00:27:25.476: INFO: At 2021-03-22 00:26:23 +0000 UTC - event for affinity-nodeport-transition-w8n6g: {kubelet latest-worker2} Killing: Stopping container affinity-nodeport-transition Mar 22 00:27:25.510: INFO: POD NODE PHASE GRACE CONDITIONS Mar 22 00:27:25.510: INFO: Mar 22 00:27:25.603: INFO: Logging node info for node latest-control-plane Mar 22 00:27:25.627: INFO: Node Info: &Node{ObjectMeta:{latest-control-plane 490b9532-4cb6-4803-8805-500c50bef538 6995324 0 2021-02-19 10:11:38 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-control-plane kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master:] map[kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2021-02-19 10:11:38 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {kubeadm Update v1 2021-02-19 10:11:39 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{}}}}} {kube-controller-manager Update v1 2021-02-19 10:12:16 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:kind://docker/latest/latest-control-plane,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-03-22 00:24:36 +0000 UTC,LastTransitionTime:2021-02-19 10:11:38 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-03-22 00:24:36 +0000 UTC,LastTransitionTime:2021-02-19 10:11:38 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-03-22 00:24:36 +0000 UTC,LastTransitionTime:2021-02-19 10:11:38 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-03-22 00:24:36 +0000 UTC,LastTransitionTime:2021-02-19 10:12:15 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.14,},NodeAddress{Type:Hostname,Address:latest-control-plane,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:2277e49732264d9b915753a27b5b08cc,SystemUUID:3fcd47a6-9190-448f-a26a-9823c0424f23,BootID:b267d78b-f69b-4338-80e8-3f4944338e5d,KernelVersion:4.15.0-118-generic,OSImage:Ubuntu 20.04 LTS,ContainerRuntimeVersion:containerd://1.3.3-14-g449e9269,KubeletVersion:v1.21.0-alpha.0,KubeProxyVersion:v1.21.0-alpha.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/etcd:3.4.13-0],SizeBytes:254659261,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.21.0-alpha.0],SizeBytes:136866158,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.21.0-alpha.0],SizeBytes:95511852,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.21.0-alpha.0],SizeBytes:88147262,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.21.0-alpha.0],SizeBytes:66088748,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:a3f7549ea04c419276e8b84e90a515bbce5bc8a057be2ed974ec45492eca346e 
k8s.gcr.io/e2e-test-images/agnhost:2.26],SizeBytes:49216572,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:4f3eec1e602e1e0b0e5858c8f3d399d0ef536799ec1f7666e025f211001cb706 k8s.gcr.io/e2e-test-images/agnhost:2.28],SizeBytes:49210832,},ContainerImage{Names:[k8s.gcr.io/coredns:1.7.0],SizeBytes:45355487,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 docker.io/aquasec/kube-bench:0.3.1],SizeBytes:8042926,},ContainerImage{Names:[k8s.gcr.io/pause:3.2],SizeBytes:685724,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:301268,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 22 00:27:25.627: INFO: Logging kubelet events for node latest-control-plane Mar 22 00:27:25.654: INFO: Logging pods the kubelet thinks are on node latest-control-plane Mar 22 00:27:25.718: INFO: etcd-latest-control-plane started at 2021-02-19 10:11:46 +0000 UTC (0+1 container statuses recorded) Mar 22 00:27:25.718: INFO: Container etcd ready: true, restart count 0 Mar 22 00:27:25.718: INFO: kube-proxy-6jdsd started at 2021-02-19 10:11:54 +0000 UTC (0+1 container statuses recorded) Mar 22 00:27:25.718: INFO: Container kube-proxy ready: true, restart count 0 Mar 22 00:27:25.718: INFO: coredns-74ff55c5b-tqd5x started at 2021-03-22 00:01:48 +0000 UTC (0+1 container statuses recorded) Mar 22 00:27:25.718: INFO: Container coredns ready: true, restart count 0 Mar 22 00:27:25.718: INFO: kube-controller-manager-latest-control-plane started at 2021-02-19 10:11:46 +0000 UTC (0+1 container statuses recorded) Mar 22 00:27:25.718: INFO: Container kube-controller-manager ready: true, restart count 0 Mar 22 00:27:25.718: INFO: kube-scheduler-latest-control-plane started at 2021-02-19 10:11:46 +0000 UTC (0+1 container statuses recorded) Mar 22 00:27:25.718: INFO: Container kube-scheduler ready: true, restart count 0 Mar 22 00:27:25.718: INFO: kube-apiserver-latest-control-plane started at 2021-02-19 10:11:46 +0000 UTC (0+1 container statuses recorded) Mar 22 00:27:25.718: INFO: Container kube-apiserver ready: true, restart count 0 Mar 22 00:27:25.718: INFO: kindnet-94zqp started at 2021-02-19 10:11:54 +0000 UTC (0+1 container statuses recorded) Mar 22 00:27:25.718: INFO: Container kindnet-cni ready: true, restart count 0 Mar 22 00:27:25.718: INFO: coredns-74ff55c5b-9rxsk started at 2021-03-22 00:01:47 +0000 UTC (0+1 container statuses recorded) Mar 22 00:27:25.718: INFO: Container coredns ready: true, restart count 0 Mar 22 00:27:25.718: INFO: local-path-provisioner-8b46957d4-54gls started at 2021-02-19 10:12:21 +0000 UTC (0+1 container statuses recorded) Mar 22 00:27:25.718: INFO: Container local-path-provisioner ready: true, restart count 0 W0322 00:27:25.761193 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
Mar 22 00:27:25.922: INFO: Latency metrics for node latest-control-plane
Mar 22 00:27:25.923: INFO: Logging node info for node latest-worker
Mar 22 00:27:25.930: INFO: Node Info: &Node{ObjectMeta:{latest-worker 52cd6d4b-d53f-435d-801a-04c2822dec44 6995433 0 2021-02-19 10:12:05 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux io.kubernetes.storage.mock/node:some-mock-node kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-worker kubernetes.io/os:linux] map[csi.volume.kubernetes.io/nodeid:{"csi-mock-csi-mock-volumes-11":"csi-mock-csi-mock-volumes-11","csi-mock-csi-mock-volumes-1170":"csi-mock-csi-mock-volumes-1170","csi-mock-csi-mock-volumes-1204":"csi-mock-csi-mock-volumes-1204","csi-mock-csi-mock-volumes-1243":"csi-mock-csi-mock-volumes-1243","csi-mock-csi-mock-volumes-135":"csi-mock-csi-mock-volumes-135","csi-mock-csi-mock-volumes-1517":"csi-mock-csi-mock-volumes-1517","csi-mock-csi-mock-volumes-1561":"csi-mock-csi-mock-volumes-1561","csi-mock-csi-mock-volumes-1595":"csi-mock-csi-mock-volumes-1595","csi-mock-csi-mock-volumes-1714":"csi-mock-csi-mock-volumes-1714","csi-mock-csi-mock-volumes-1732":"csi-mock-csi-mock-volumes-1732","csi-mock-csi-mock-volumes-1876":"csi-mock-csi-mock-volumes-1876","csi-mock-csi-mock-volumes-1945":"csi-mock-csi-mock-volumes-1945","csi-mock-csi-mock-volumes-2041":"csi-mock-csi-mock-volumes-2041","csi-mock-csi-mock-volumes-2057":"csi-mock-csi-mock-volumes-2057","csi-mock-csi-mock-volumes-2208":"csi-mock-csi-mock-volumes-2208","csi-mock-csi-mock-volumes-2302":"csi-mock-csi-mock-volumes-2302","csi-mock-csi-mock-volumes-2459":"csi-mock-csi-mock-volumes-2459","csi-mock-csi-mock-volumes-2511":"csi-mock-csi-mock-volumes-2511","csi-mock-csi-mock-volumes-2706":"csi-mock-csi-mock-volumes-2706","csi-mock-csi-mock-volumes-2710":"csi-mock-csi-mock-volumes-2710","csi-mock-csi-mock-volumes-273":"csi-mock-csi-mock-volumes-273","csi-mock-csi-mock-volumes-2782":"csi-mock-csi-mock-volumes-2782","csi-mock-csi-mock-volumes-2797":"csi-mock-csi-mock-volumes-2797","csi-mock-csi-mock-volumes-2805":"csi-mock-csi-mock-volumes-2805","csi-mock-csi-mock-volumes-2843":"csi-mock-csi-mock-volumes-2843","csi-mock-csi-mock-volumes-3075":"csi-mock-csi-mock-volumes-3075","csi-mock-csi-mock-volumes-3107":"csi-mock-csi-mock-volumes-3107","csi-mock-csi-mock-volumes-3115":"csi-mock-csi-mock-volumes-3115","csi-mock-csi-mock-volumes-3164":"csi-mock-csi-mock-volumes-3164","csi-mock-csi-mock-volumes-3202":"csi-mock-csi-mock-volumes-3202","csi-mock-csi-mock-volumes-3235":"csi-mock-csi-mock-volumes-3235","csi-mock-csi-mock-volumes-3298":"csi-mock-csi-mock-volumes-3298","csi-mock-csi-mock-volumes-3313":"csi-mock-csi-mock-volumes-3313","csi-mock-csi-mock-volumes-3364":"csi-mock-csi-mock-volumes-3364","csi-mock-csi-mock-volumes-3488":"csi-mock-csi-mock-volumes-3488","csi-mock-csi-mock-volumes-3490":"csi-mock-csi-mock-volumes-3490","csi-mock-csi-mock-volumes-3546":"csi-mock-csi-mock-volumes-3546","csi-mock-csi-mock-volumes-3568":"csi-mock-csi-mock-volumes-3568","csi-mock-csi-mock-volumes-3595":"csi-mock-csi-mock-volumes-3595","csi-mock-csi-mock-volumes-3615":"csi-mock-csi-mock-volumes-3615","csi-mock-csi-mock-volumes-3622":"csi-mock-csi-mock-volumes-3622","csi-mock-csi-mock-volumes-3660":"csi-mock-csi-mock-volumes-3660","csi-mock-csi-mock-volumes-3738":"csi-mock-csi-mock-volumes-3738","csi-mock-csi-mock-volumes-380":"csi-mock-csi-mock-volumes-380","csi-mock-csi-mock-volumes-3905":"csi-mock-csi-mock-volumes-3905","csi-mock-csi-mock-volumes-3983":"csi-mock-csi-mock-volumes-3983","csi-mock-csi-mock-volumes-4658":"csi-mock-csi-mock-volumes-4658","csi-mock-csi-mock-volumes-4689":"csi-mock-csi-mock-volumes-4689","csi-mock-csi-mock-volumes-4839":"csi-mock-csi-mock-volumes-4839","csi-mock-csi-mock-volumes-4871":"csi-mock-csi-mock-volumes-4871","csi-mock-csi-mock-volumes-4885":"csi-mock-csi-mock-volumes-4885","csi-mock-csi-mock-volumes-4888":"csi-mock-csi-mock-volumes-4888","csi-mock-csi-mock-volumes-5028":"csi-mock-csi-mock-volumes-5028","csi-mock-csi-mock-volumes-5118":"csi-mock-csi-mock-volumes-5118","csi-mock-csi-mock-volumes-5120":"csi-mock-csi-mock-volumes-5120","csi-mock-csi-mock-volumes-5160":"csi-mock-csi-mock-volumes-5160","csi-mock-csi-mock-volumes-5164":"csi-mock-csi-mock-volumes-5164","csi-mock-csi-mock-volumes-5225":"csi-mock-csi-mock-volumes-5225","csi-mock-csi-mock-volumes-526":"csi-mock-csi-mock-volumes-526","csi-mock-csi-mock-volumes-5365":"csi-mock-csi-mock-volumes-5365","csi-mock-csi-mock-volumes-5399":"csi-mock-csi-mock-volumes-5399","csi-mock-csi-mock-volumes-5443":"csi-mock-csi-mock-volumes-5443","csi-mock-csi-mock-volumes-5520":"csi-mock-csi-mock-volumes-5520","csi-mock-csi-mock-volumes-5561":"csi-mock-csi-mock-volumes-5561","csi-mock-csi-mock-volumes-5608":"csi-mock-csi-mock-volumes-5608","csi-mock-csi-mock-volumes-5652":"csi-mock-csi-mock-volumes-5652","csi-mock-csi-mock-volumes-5672":"csi-mock-csi-mock-volumes-5672","csi-mock-csi-mock-volumes-569":"csi-mock-csi-mock-volumes-569","csi-mock-csi-mock-volumes-5759":"csi-mock-csi-mock-volumes-5759","csi-mock-csi-mock-volumes-5910":"csi-mock-csi-mock-volumes-5910","csi-mock-csi-mock-volumes-6046":"csi-mock-csi-mock-volumes-6046","csi-mock-csi-mock-volumes-6099":"csi-mock-csi-mock-volumes-6099","csi-mock-csi-mock-volumes-621":"csi-mock-csi-mock-volumes-621","csi-mock-csi-mock-volumes-6347":"csi-mock-csi-mock-volumes-6347","csi-mock-csi-mock-volumes-6447":"csi-mock-csi-mock-volumes-6447","csi-mock-csi-mock-volumes-6752":"csi-mock-csi-mock-volumes-6752","csi-mock-csi-mock-volumes-6763":"csi-mock-csi-mock-volumes-6763","csi-mock-csi-mock-volumes-7184":"csi-mock-csi-mock-volumes-7184","csi-mock-csi-mock-volumes-7244":"csi-mock-csi-mock-volumes-7244","csi-mock-csi-mock-volumes-7259":"csi-mock-csi-mock-volumes-7259","csi-mock-csi-mock-volumes-726":"csi-mock-csi-mock-volumes-726","csi-mock-csi-mock-volumes-7302":"csi-mock-csi-mock-volumes-7302","csi-mock-csi-mock-volumes-7346":"csi-mock-csi-mock-volumes-7346","csi-mock-csi-mock-volumes-7378":"csi-mock-csi-mock-volumes-7378","csi-mock-csi-mock-volumes-7385":"csi-mock-csi-mock-volumes-7385","csi-mock-csi-mock-volumes-746":"csi-mock-csi-mock-volumes-746","csi-mock-csi-mock-volumes-7574":"csi-mock-csi-mock-volumes-7574","csi-mock-csi-mock-volumes-7712":"csi-mock-csi-mock-volumes-7712","csi-mock-csi-mock-volumes-7749":"csi-mock-csi-mock-volumes-7749","csi-mock-csi-mock-volumes-7820":"csi-mock-csi-mock-volumes-7820","csi-mock-csi-mock-volumes-7946":"csi-mock-csi-mock-volumes-7946","csi-mock-csi-mock-volumes-8159":"csi-mock-csi-mock-volumes-8159","csi-mock-csi-mock-volumes-8382":"csi-mock-csi-mock-volumes-8382","csi-mock-csi-mock-volumes-8458":"csi-mock-csi-mock-volumes-8458","csi-mock-csi-mock-volumes-8569":"csi-mock-csi-mock-volumes-8569","csi-mock-csi-mock-volumes-8626":"csi-mock-csi-mock-volumes-8626","csi-mock-csi-mock-volumes-8627":"csi-mock-csi-mock-volumes-8627","csi-mock-csi-mock-volumes-8774":"csi-mock-csi-mock-volumes-8774","csi-mock-csi-mock-volumes-8777":"csi-mock-csi-mock-volumes-8777","csi-mock-csi-mock-volumes-8880":"csi-mock-csi-mock-volumes-8880","csi-mock-csi-mock-volumes-923":"csi-mock-csi-mock-volumes-923","csi-mock-csi-mock-volumes-9279":"csi-mock-csi-mock-volumes-9279","csi-mock-csi-mock-volumes-9372":"csi-mock-csi-mock-volumes-9372","csi-mock-csi-mock-volumes-9687":"csi-mock-csi-mock-volumes-9687","csi-mock-csi-mock-volumes-969":"csi-mock-csi-mock-volumes-969","csi-mock-csi-mock-volumes-983":"csi-mock-csi-mock-volumes-983","csi-mock-csi-mock-volumes-9858":"csi-mock-csi-mock-volumes-9858","csi-mock-csi-mock-volumes-9983":"csi-mock-csi-mock-volumes-9983","csi-mock-csi-mock-volumes-9992":"csi-mock-csi-mock-volumes-9992","csi-mock-csi-mock-volumes-9995":"csi-mock-csi-mock-volumes-9995"} kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-02-19 10:12:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {kube-controller-manager Update v1 2021-03-22 00:07:53 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}}}}} {kubelet Update v1 2021-03-22 00:08:24 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:csi.volume.kubernetes.io/nodeid":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:io.kubernetes.storage.mock/node":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:kind://docker/latest/latest-worker,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0
DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-03-22 00:23:26 +0000 UTC,LastTransitionTime:2021-02-19 10:12:05 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-03-22 00:23:26 +0000 UTC,LastTransitionTime:2021-02-19 10:12:05 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-03-22 00:23:26 +0000 UTC,LastTransitionTime:2021-02-19 10:12:05 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-03-22 00:23:26 +0000 UTC,LastTransitionTime:2021-02-19 10:12:25 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.9,},NodeAddress{Type:Hostname,Address:latest-worker,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:9501a3ffa7bf40adaca8f4cb3d1dcc93,SystemUUID:6921bb21-67c9-42ea-b514-81b1daac5968,BootID:b267d78b-f69b-4338-80e8-3f4944338e5d,KernelVersion:4.15.0-118-generic,OSImage:Ubuntu 20.04 LTS,ContainerRuntimeVersion:containerd://1.3.3-14-g449e9269,KubeletVersion:v1.21.0-alpha.0,KubeProxyVersion:v1.21.0-alpha.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/ollivier/clearwater-cassandra@sha256:7460a5e29c91e16bd36534b5284b27832e1c6e0f7c22a4bb79eda79942d250e1 docker.io/ollivier/clearwater-cassandra:hunter],SizeBytes:386500834,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead-prov@sha256:ef3803b4336ea217f9a467dac7f5c4bc3d2cd0f3cbbcc6c101419506fc3a6fa4 docker.io/ollivier/clearwater-homestead-prov:hunter],SizeBytes:360721934,},ContainerImage{Names:[docker.io/ollivier/clearwater-ellis@sha256:abcaf4a9ec74aa746f3b3949f05ee359349fce2b0b3b2deedf67164719cff5dc docker.io/ollivier/clearwater-ellis:hunter],SizeBytes:351519591,},ContainerImage{Names:[docker.io/ollivier/clearwater-homer@sha256:f8738d756137990120f6aafca7e0bc57186ab591a4b51508ba9ddf30688f4de1 docker.io/ollivier/clearwater-homer:hunter],SizeBytes:344304298,},ContainerImage{Names:[docker.io/ollivier/clearwater-astaire@sha256:5bc730435563dc3e31f3a3b7bb58b5899c453929b624052622b3a9a1c2fd04d8 docker.io/ollivier/clearwater-astaire:hunter],SizeBytes:327310970,},ContainerImage{Names:[docker.io/ollivier/clearwater-bono@sha256:0da41ddbf72c158f8e617015450587634cdf6856696005601b2108df92a27254 docker.io/ollivier/clearwater-bono:hunter],SizeBytes:303708624,},ContainerImage{Names:[docker.io/ollivier/clearwater-sprout@sha256:fd91f90bdfbfac30288ca296962e36537f8fa311d34711904e027098227d9f49 docker.io/ollivier/clearwater-sprout:hunter],SizeBytes:298627136,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead@sha256:8467d6c9caefa6a402eba927ee31f8179b2dc3386ac4768dd04f4bd630a1b9e9 docker.io/ollivier/clearwater-homestead:hunter],SizeBytes:295167572,},ContainerImage{Names:[docker.io/ollivier/clearwater-ralf@sha256:7054a741dbd4ea45fa4efd8138c07e21d35fbed627971559de5df2d03883e94f 
docker.io/ollivier/clearwater-ralf:hunter],SizeBytes:287441316,},ContainerImage{Names:[docker.io/ollivier/clearwater-chronos@sha256:1d5f9b5385db632e3076463fec5a4db5d4a15ef21ac5a112cb42fb0551ffa36d docker.io/ollivier/clearwater-chronos:hunter],SizeBytes:285504787,},ContainerImage{Names:[k8s.gcr.io/etcd:3.4.13-0],SizeBytes:254659261,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.21.0-alpha.0],SizeBytes:136866158,},ContainerImage{Names:[docker.io/aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 docker.io/aquasec/kube-hunter:0.3.1],SizeBytes:124684106,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:112029652,},ContainerImage{Names:[docker.io/gluster/glusterdynamic-provisioner@sha256:90067cb05a7d217651e84576935934fd9dcff8e6bcdcbaa416bbf36fcd09dbd1 docker.io/gluster/glusterdynamic-provisioner:v1.0],SizeBytes:111200078,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner@sha256:8bc20b52ce066dd4ea3d9eaac40c04ea8a77f47c33789676580cf4c7c9ea3c3d k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner:v1.0],SizeBytes:111199402,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.21.0-alpha.0],SizeBytes:95511852,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.21.0-alpha.0],SizeBytes:88147262,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:85425365,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.21.0-alpha.0],SizeBytes:66088748,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:49230179,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:a3f7549ea04c419276e8b84e90a515bbce5bc8a057be2ed974ec45492eca346e k8s.gcr.io/e2e-test-images/agnhost:2.26],SizeBytes:49216572,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:4f3eec1e602e1e0b0e5858c8f3d399d0ef536799ec1f7666e025f211001cb706 k8s.gcr.io/e2e-test-images/agnhost:2.28],SizeBytes:49210832,},ContainerImage{Names:[docker.io/bitnami/kubectl@sha256:471a2f06834db208d0e4bef5856550f911357c41b72d0baefa591b3714839067 docker.io/bitnami/kubectl:latest],SizeBytes:48898314,},ContainerImage{Names:[k8s.gcr.io/coredns:1.7.0],SizeBytes:45355487,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:41902332,},ContainerImage{Names:[docker.io/library/httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a docker.io/library/httpd:2.4.39-alpine],SizeBytes:41901429,},ContainerImage{Names:[docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 docker.io/library/httpd:2.4.38-alpine],SizeBytes:40765017,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 
k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:40765006,},ContainerImage{Names:[docker.io/ollivier/clearwater-live-test@sha256:1ec1c46f531b5466e62e2a3f1e65e97336e8ea0ca50f11a4305775d6aeac9a58 docker.io/ollivier/clearwater-live-test:hunter],SizeBytes:39060692,},ContainerImage{Names:[k8s.gcr.io/build-image/debian-iptables@sha256:abe8cef9e116f2d5ec1175c386e33841ff3386779138b425af876384b0fd7ccb k8s.gcr.io/build-image/debian-iptables:buster-v1.5.0],SizeBytes:38088315,},ContainerImage{Names:[k8s.gcr.io/build-image/debian-iptables@sha256:4c9410a4ee555dcb0e8b7bd6fc77c65ac400f7c5bd4555df8187630efaea6ea4 k8s.gcr.io/build-image/debian-iptables:buster-v1.3.0],SizeBytes:37934917,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:ff02aacd9766d597883fabafc7ad604c719a57611db1bcc1564c69a45b000a55 gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.17],SizeBytes:25311280,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:24757245,},ContainerImage{Names:[docker.io/pingcap/chaos-daemon@sha256:2138de599ca94104101fcd8f3bd04401f8c88f2bc721d0f77d55a81b85f4d92f docker.io/pingcap/chaos-daemon:v0.8.0],SizeBytes:21885344,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:20c828075d1e36f679d6a91e905b0927141eef5e15be0c9a1ca4a6a0ed9313d2 k8s.gcr.io/sig-storage/csi-provisioner:v2.1.0],SizeBytes:21205045,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:78e3393f5fd5ff6c1e5dada2478cfa456fb7164929e573cf9a87bf6532730679 k8s.gcr.io/sig-storage/csi-provisioner:v1.6.0],SizeBytes:19408504,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:2ffa647e8107cfd39e5f464e738dce014c9f5e51b108da36c3ab621048d0bbab k8s.gcr.io/sig-storage/csi-attacher:v2.2.0],SizeBytes:18451536,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:6c6a0332693a7c456378f6abd2bb40611826c1e1a733cadbdae2daab3125b71c k8s.gcr.io/sig-storage/csi-resizer:v0.5.0],SizeBytes:18412631,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:4051e85640c22f8e00c34dbd273576fc9e1e2829992656588062be9c0f69b04b k8s.gcr.io/e2e-test-images/nonroot:1.1],SizeBytes:17748448,},ContainerImage{Names:[docker.io/pingcap/chaos-mesh@sha256:b90c04665e0275602e52df37d4b6fdd4621df7d4e4c823fc63356559f67dea72 docker.io/pingcap/chaos-mesh:v0.8.0],SizeBytes:13863887,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e docker.io/coredns/coredns:latest docker.io/coredns/coredns:1.8.0],SizeBytes:12945155,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:4a6e0769130686518325b21b0c1d0688b54e7c79244d48e1b15634e98e40c6ef docker.io/coredns/coredns:1.7.1],SizeBytes:12935701,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 
docker.io/aquasec/kube-bench:0.3.1],SizeBytes:8042926,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:9622c6a6dac7499a055a382930f4de82905a3c5735c0753f7094115c9c871309 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v1.3.0],SizeBytes:7717137,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 22 00:27:25.931: INFO: Logging kubelet events for node latest-worker Mar 22 00:27:26.014: INFO: Logging pods the kubelet thinks are on node latest-worker Mar 22 00:27:26.077: INFO: kindnet-l4mzm started at 2021-03-22 00:02:51 +0000 UTC (0+1 container statuses recorded) Mar 22 00:27:26.077: INFO: Container kindnet-cni ready: true, restart count 0 Mar 22 00:27:26.077: INFO: chaos-daemon-vb9xf started at 2021-03-22 00:02:51 +0000 UTC (0+1 container statuses recorded) Mar 22 00:27:26.077: INFO: Container chaos-daemon ready: true, restart count 0 Mar 22 00:27:26.077: INFO: chaos-controller-manager-69c479c674-rdmrr started at 2021-03-22 00:02:52 +0000 UTC (0+1 container statuses recorded) Mar 22 00:27:26.077: INFO: Container chaos-mesh ready: true, restart count 0 Mar 22 00:27:26.077: INFO: pod-adfa6ce1-643b-41b3-935e-e8a9f3bd96c4 started at 2021-03-22 00:27:11 +0000 UTC (0+1 container statuses recorded) Mar 22 00:27:26.077: INFO: Container write-pod ready: false, restart count 0 Mar 22 00:27:26.077: INFO: hostexec-latest-worker-4scdz started at 2021-03-22 00:26:57 +0000 UTC (0+1 container statuses recorded) Mar 22 00:27:26.077: INFO: Container agnhost-container ready: true, restart count 0 Mar 22 00:27:26.077: INFO: kube-proxy-5wvjm started at 2021-02-19 10:12:05 +0000 UTC (0+1 container statuses recorded) Mar 22 00:27:26.077: INFO: Container kube-proxy ready: true, restart count 0 Mar 22 00:27:26.077: INFO: pod-00b21cdc-9021-42e8-9d06-d2013323464d started at 2021-03-22 00:27:16 +0000 UTC (0+1 container statuses recorded) Mar 22 00:27:26.077: INFO: Container write-pod ready: false, restart count 0 W0322 00:27:26.141586 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
Mar 22 00:27:26.401: INFO: Latency metrics for node latest-worker
Mar 22 00:27:26.401: INFO: Logging node info for node latest-worker2
Mar 22 00:27:26.540: INFO: Node Info: &Node{ObjectMeta:{latest-worker2 7d2a1377-0c6f-45fb-899e-6c307ecb1803 6995246 0 2021-02-19 10:12:05 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux io.kubernetes.storage.mock/node:some-mock-node kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-worker2 kubernetes.io/os:linux] map[csi.volume.kubernetes.io/nodeid:{"csi-mock-csi-mock-volumes-1005":"csi-mock-csi-mock-volumes-1005","csi-mock-csi-mock-volumes-1125":"csi-mock-csi-mock-volumes-1125","csi-mock-csi-mock-volumes-1158":"csi-mock-csi-mock-volumes-1158","csi-mock-csi-mock-volumes-1167":"csi-mock-csi-mock-volumes-1167","csi-mock-csi-mock-volumes-117":"csi-mock-csi-mock-volumes-117","csi-mock-csi-mock-volumes-123":"csi-mock-csi-mock-volumes-123","csi-mock-csi-mock-volumes-1248":"csi-mock-csi-mock-volumes-1248","csi-mock-csi-mock-volumes-132":"csi-mock-csi-mock-volumes-132","csi-mock-csi-mock-volumes-1462":"csi-mock-csi-mock-volumes-1462","csi-mock-csi-mock-volumes-1508":"csi-mock-csi-mock-volumes-1508","csi-mock-csi-mock-volumes-1620":"csi-mock-csi-mock-volumes-1620","csi-mock-csi-mock-volumes-1694":"csi-mock-csi-mock-volumes-1694","csi-mock-csi-mock-volumes-1823":"csi-mock-csi-mock-volumes-1823","csi-mock-csi-mock-volumes-1829":"csi-mock-csi-mock-volumes-1829","csi-mock-csi-mock-volumes-1852":"csi-mock-csi-mock-volumes-1852","csi-mock-csi-mock-volumes-1856":"csi-mock-csi-mock-volumes-1856","csi-mock-csi-mock-volumes-194":"csi-mock-csi-mock-volumes-194","csi-mock-csi-mock-volumes-1943":"csi-mock-csi-mock-volumes-1943","csi-mock-csi-mock-volumes-2103":"csi-mock-csi-mock-volumes-2103","csi-mock-csi-mock-volumes-2109":"csi-mock-csi-mock-volumes-2109","csi-mock-csi-mock-volumes-2134":"csi-mock-csi-mock-volumes-2134","csi-mock-csi-mock-volumes-2212":"csi-mock-csi-mock-volumes-2212","csi-mock-csi-mock-volumes-2302":"csi-mock-csi-mock-volumes-2302","csi-mock-csi-mock-volumes-2308":"csi-mock-csi-mock-volumes-2308","csi-mock-csi-mock-volumes-2407":"csi-mock-csi-mock-volumes-2407","csi-mock-csi-mock-volumes-2458":"csi-mock-csi-mock-volumes-2458","csi-mock-csi-mock-volumes-2474":"csi-mock-csi-mock-volumes-2474","csi-mock-csi-mock-volumes-254":"csi-mock-csi-mock-volumes-254","csi-mock-csi-mock-volumes-2575":"csi-mock-csi-mock-volumes-2575","csi-mock-csi-mock-volumes-2622":"csi-mock-csi-mock-volumes-2622","csi-mock-csi-mock-volumes-2651":"csi-mock-csi-mock-volumes-2651","csi-mock-csi-mock-volumes-2668":"csi-mock-csi-mock-volumes-2668","csi-mock-csi-mock-volumes-2781":"csi-mock-csi-mock-volumes-2781","csi-mock-csi-mock-volumes-2791":"csi-mock-csi-mock-volumes-2791","csi-mock-csi-mock-volumes-2823":"csi-mock-csi-mock-volumes-2823","csi-mock-csi-mock-volumes-2847":"csi-mock-csi-mock-volumes-2847","csi-mock-csi-mock-volumes-295":"csi-mock-csi-mock-volumes-295","csi-mock-csi-mock-volumes-3129":"csi-mock-csi-mock-volumes-3129","csi-mock-csi-mock-volumes-3190":"csi-mock-csi-mock-volumes-3190","csi-mock-csi-mock-volumes-3428":"csi-mock-csi-mock-volumes-3428","csi-mock-csi-mock-volumes-3490":"csi-mock-csi-mock-volumes-3490","csi-mock-csi-mock-volumes-3491":"csi-mock-csi-mock-volumes-3491","csi-mock-csi-mock-volumes-3532":"csi-mock-csi-mock-volumes-3532","csi-mock-csi-mock-volumes-3609":"csi-mock-csi-mock-volumes-3609","csi-mock-csi-mock-volumes-368":"csi-mock-csi-mock-volumes-368","csi-mock-csi-mock-volumes-3698":"csi-mock-csi-mock-volumes-3698","csi-mock-csi-mock-volumes-3714":"csi-mock-csi-mock-volumes-3714","csi-mock-csi-mock-volumes-3720":"csi-mock-csi-mock-volumes-3720","csi-mock-csi-mock-volumes-3722":"csi-mock-csi-mock-volumes-3722","csi-mock-csi-mock-volumes-3723":"csi-mock-csi-mock-volumes-3723","csi-mock-csi-mock-volumes-3783":"csi-mock-csi-mock-volumes-3783","csi-mock-csi-mock-volumes-3887":"csi-mock-csi-mock-volumes-3887","csi-mock-csi-mock-volumes-3973":"csi-mock-csi-mock-volumes-3973","csi-mock-csi-mock-volumes-4074":"csi-mock-csi-mock-volumes-4074","csi-mock-csi-mock-volumes-4164":"csi-mock-csi-mock-volumes-4164","csi-mock-csi-mock-volumes-4181":"csi-mock-csi-mock-volumes-4181","csi-mock-csi-mock-volumes-4442":"csi-mock-csi-mock-volumes-4442","csi-mock-csi-mock-volumes-4483":"csi-mock-csi-mock-volumes-4483","csi-mock-csi-mock-volumes-4549":"csi-mock-csi-mock-volumes-4549","csi-mock-csi-mock-volumes-4707":"csi-mock-csi-mock-volumes-4707","csi-mock-csi-mock-volumes-4906":"csi-mock-csi-mock-volumes-4906","csi-mock-csi-mock-volumes-4977":"csi-mock-csi-mock-volumes-4977","csi-mock-csi-mock-volumes-5116":"csi-mock-csi-mock-volumes-5116","csi-mock-csi-mock-volumes-5166":"csi-mock-csi-mock-volumes-5166","csi-mock-csi-mock-volumes-5233":"csi-mock-csi-mock-volumes-5233","csi-mock-csi-mock-volumes-5344":"csi-mock-csi-mock-volumes-5344","csi-mock-csi-mock-volumes-5406":"csi-mock-csi-mock-volumes-5406","csi-mock-csi-mock-volumes-5433":"csi-mock-csi-mock-volumes-5433","csi-mock-csi-mock-volumes-5483":"csi-mock-csi-mock-volumes-5483","csi-mock-csi-mock-volumes-5486":"csi-mock-csi-mock-volumes-5486","csi-mock-csi-mock-volumes-5520":"csi-mock-csi-mock-volumes-5520","csi-mock-csi-mock-volumes-5540":"csi-mock-csi-mock-volumes-5540","csi-mock-csi-mock-volumes-5592":"csi-mock-csi-mock-volumes-5592","csi-mock-csi-mock-volumes-5741":"csi-mock-csi-mock-volumes-5741","csi-mock-csi-mock-volumes-5753":"csi-mock-csi-mock-volumes-5753","csi-mock-csi-mock-volumes-5790":"csi-mock-csi-mock-volumes-5790","csi-mock-csi-mock-volumes-5820":"csi-mock-csi-mock-volumes-5820","csi-mock-csi-mock-volumes-5830":"csi-mock-csi-mock-volumes-5830","csi-mock-csi-mock-volumes-5880":"csi-mock-csi-mock-volumes-5880","csi-mock-csi-mock-volumes-5886":"csi-mock-csi-mock-volumes-5886","csi-mock-csi-mock-volumes-5899":"csi-mock-csi-mock-volumes-5899","csi-mock-csi-mock-volumes-5925":"csi-mock-csi-mock-volumes-5925","csi-mock-csi-mock-volumes-5928":"csi-mock-csi-mock-volumes-5928","csi-mock-csi-mock-volumes-6098":"csi-mock-csi-mock-volumes-6098","csi-mock-csi-mock-volumes-6154":"csi-mock-csi-mock-volumes-6154","csi-mock-csi-mock-volumes-6193":"csi-mock-csi-mock-volumes-6193","csi-mock-csi-mock-volumes-6237":"csi-mock-csi-mock-volumes-6237","csi-mock-csi-mock-volumes-6393":"csi-mock-csi-mock-volumes-6393","csi-mock-csi-mock-volumes-6394":"csi-mock-csi-mock-volumes-6394","csi-mock-csi-mock-volumes-6468":"csi-mock-csi-mock-volumes-6468","csi-mock-csi-mock-volumes-6508":"csi-mock-csi-mock-volumes-6508","csi-mock-csi-mock-volumes-6516":"csi-mock-csi-mock-volumes-6516","csi-mock-csi-mock-volumes-6520":"csi-mock-csi-mock-volumes-6520","csi-mock-csi-mock-volumes-6574":"csi-mock-csi-mock-volumes-6574","csi-mock-csi-mock-volumes-6663":"csi-mock-csi-mock-volumes-6663","csi-mock-csi-mock-volumes-6715":"csi-mock-csi-mock-volumes-6715","csi-mock-csi-mock-volumes-6754":"csi-mock-csi-mock-volumes-6754","csi-mock-csi-mock-volumes-6804":"csi-mock-csi-mock-volumes-6804","csi-mock-csi-mock-volumes-6918":"csi-mock-csi-mock-volumes-6918","csi-mock-csi-mock-volumes-6925":"csi-mock-csi-mock-volumes-6925","csi-mock-csi-mock-volumes-7092":"csi-mock-csi-mock-volumes-7092","csi-mock-csi-mock-volumes-7139":"csi-mock-csi-mock-volumes-7139","csi-mock-csi-mock-volumes-7270":"csi-mock-csi-mock-volumes-7270","csi-mock-csi-mock-volumes-7273":"csi-mock-csi-mock-volumes-7273","csi-mock-csi-mock-volumes-7442":"csi-mock-csi-mock-volumes-7442","csi-mock-csi-mock-volumes-7448":"csi-mock-csi-mock-volumes-7448","csi-mock-csi-mock-volumes-7543":"csi-mock-csi-mock-volumes-7543","csi-mock-csi-mock-volumes-7597":"csi-mock-csi-mock-volumes-7597","csi-mock-csi-mock-volumes-7608":"csi-mock-csi-mock-volumes-7608","csi-mock-csi-mock-volumes-7642":"csi-mock-csi-mock-volumes-7642","csi-mock-csi-mock-volumes-7659":"csi-mock-csi-mock-volumes-7659","csi-mock-csi-mock-volumes-7725":"csi-mock-csi-mock-volumes-7725","csi-mock-csi-mock-volumes-7760":"csi-mock-csi-mock-volumes-7760","csi-mock-csi-mock-volumes-778":"csi-mock-csi-mock-volumes-778","csi-mock-csi-mock-volumes-7811":"csi-mock-csi-mock-volumes-7811","csi-mock-csi-mock-volumes-7819":"csi-mock-csi-mock-volumes-7819","csi-mock-csi-mock-volumes-791":"csi-mock-csi-mock-volumes-791","csi-mock-csi-mock-volumes-7929":"csi-mock-csi-mock-volumes-7929","csi-mock-csi-mock-volumes-7930":"csi-mock-csi-mock-volumes-7930","csi-mock-csi-mock-volumes-7933":"csi-mock-csi-mock-volumes-7933","csi-mock-csi-mock-volumes-7950":"csi-mock-csi-mock-volumes-7950","csi-mock-csi-mock-volumes-8005":"csi-mock-csi-mock-volumes-8005","csi-mock-csi-mock-volumes-8070":"csi-mock-csi-mock-volumes-8070","csi-mock-csi-mock-volumes-8123":"csi-mock-csi-mock-volumes-8123","csi-mock-csi-mock-volumes-8132":"csi-mock-csi-mock-volumes-8132","csi-mock-csi-mock-volumes-8134":"csi-mock-csi-mock-volumes-8134","csi-mock-csi-mock-volumes-8381":"csi-mock-csi-mock-volumes-8381","csi-mock-csi-mock-volumes-8391":"csi-mock-csi-mock-volumes-8391","csi-mock-csi-mock-volumes-8409":"csi-mock-csi-mock-volumes-8409","csi-mock-csi-mock-volumes-8420":"csi-mock-csi-mock-volumes-8420","csi-mock-csi-mock-volumes-8467":"csi-mock-csi-mock-volumes-8467","csi-mock-csi-mock-volumes-8581":"csi-mock-csi-mock-volumes-8581","csi-mock-csi-mock-volumes-8619":"csi-mock-csi-mock-volumes-8619","csi-mock-csi-mock-volumes-8708":"csi-mock-csi-mock-volumes-8708","csi-mock-csi-mock-volumes-8766":"csi-mock-csi-mock-volumes-8766","csi-mock-csi-mock-volumes-8789":"csi-mock-csi-mock-volumes-8789","csi-mock-csi-mock-volumes-8800":"csi-mock-csi-mock-volumes-8800","csi-mock-csi-mock-volumes-8819":"csi-mock-csi-mock-volumes-8819","csi-mock-csi-mock-volumes-8830":"csi-mock-csi-mock-volumes-8830","csi-mock-csi-mock-volumes-9034":"csi-mock-csi-mock-volumes-9034","csi-mock-csi-mock-volumes-9051":"csi-mock-csi-mock-volumes-9051","csi-mock-csi-mock-volumes-9052":"csi-mock-csi-mock-volumes-9052","csi-mock-csi-mock-volumes-9211":"csi-mock-csi-mock-volumes-9211","csi-mock-csi-mock-volumes-9234":"csi-mock-csi-mock-volumes-9234","csi-mock-csi-mock-volumes-9238":"csi-mock-csi-mock-volumes-9238","csi-mock-csi-mock-volumes-9316":"csi-mock-csi-mock-volumes-9316","csi-mock-csi-mock-volumes-9344":"csi-mock-csi-mock-volumes-9344","csi-mock-csi-mock-volumes-9386":"csi-mock-csi-mock-volumes-9386","csi-mock-csi-mock-volumes-941":"csi-mock-csi-mock-volumes-941","csi-mock-csi-mock-volumes-9416":"csi-mock-csi-mock-volumes-9416","csi-mock-csi-mock-volumes-9426":"csi-mock-csi-mock-volumes-9426","csi-mock-csi-mock-volumes-9429":"csi-mock-csi-mock-volumes-9429","csi-mock-csi-mock-volumes-9438":"csi-mock-csi-mock-volumes-9438","csi-mock-csi-mock-volumes-9439":"csi-mock-csi-mock-volumes-9439","csi-
mock-csi-mock-volumes-9665":"csi-mock-csi-mock-volumes-9665","csi-mock-csi-mock-volumes-9772":"csi-mock-csi-mock-volumes-9772","csi-mock-csi-mock-volumes-9793":"csi-mock-csi-mock-volumes-9793","csi-mock-csi-mock-volumes-9921":"csi-mock-csi-mock-volumes-9921","csi-mock-csi-mock-volumes-9953":"csi-mock-csi-mock-volumes-9953"} kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-02-19 10:12:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {kube-controller-manager Update v1 2021-03-22 00:19:03 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}} {kubelet Update v1 2021-03-22 00:19:16 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:csi.volume.kubernetes.io/nodeid":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:io.kubernetes.storage.mock/node":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:kind://docker/latest/latest-worker2,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-03-22 00:24:16 +0000 UTC,LastTransitionTime:2021-02-19 10:12:05 +0000 
UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-03-22 00:24:16 +0000 UTC,LastTransitionTime:2021-02-19 10:12:05 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-03-22 00:24:16 +0000 UTC,LastTransitionTime:2021-02-19 10:12:05 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-03-22 00:24:16 +0000 UTC,LastTransitionTime:2021-02-19 10:12:35 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.13,},NodeAddress{Type:Hostname,Address:latest-worker2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:c99819c175ab4bf18f91789643e4cec7,SystemUUID:67d3f1bb-f61c-4599-98ac-5879bee4ddde,BootID:b267d78b-f69b-4338-80e8-3f4944338e5d,KernelVersion:4.15.0-118-generic,OSImage:Ubuntu 20.04 LTS,ContainerRuntimeVersion:containerd://1.3.3-14-g449e9269,KubeletVersion:v1.21.0-alpha.0,KubeProxyVersion:v1.21.0-alpha.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/ollivier/clearwater-cassandra@sha256:7460a5e29c91e16bd36534b5284b27832e1c6e0f7c22a4bb79eda79942d250e1 docker.io/ollivier/clearwater-cassandra:hunter],SizeBytes:386500834,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead-prov@sha256:ef3803b4336ea217f9a467dac7f5c4bc3d2cd0f3cbbcc6c101419506fc3a6fa4 docker.io/ollivier/clearwater-homestead-prov:hunter],SizeBytes:360721934,},ContainerImage{Names:[docker.io/ollivier/clearwater-ellis@sha256:abcaf4a9ec74aa746f3b3949f05ee359349fce2b0b3b2deedf67164719cff5dc docker.io/ollivier/clearwater-ellis:hunter],SizeBytes:351519591,},ContainerImage{Names:[docker.io/ollivier/clearwater-homer@sha256:f8738d756137990120f6aafca7e0bc57186ab591a4b51508ba9ddf30688f4de1 docker.io/ollivier/clearwater-homer:hunter],SizeBytes:344304298,},ContainerImage{Names:[docker.io/ollivier/clearwater-astaire@sha256:5bc730435563dc3e31f3a3b7bb58b5899c453929b624052622b3a9a1c2fd04d8 docker.io/ollivier/clearwater-astaire:hunter],SizeBytes:327310970,},ContainerImage{Names:[docker.io/ollivier/clearwater-bono@sha256:0da41ddbf72c158f8e617015450587634cdf6856696005601b2108df92a27254 docker.io/ollivier/clearwater-bono:hunter],SizeBytes:303708624,},ContainerImage{Names:[docker.io/ollivier/clearwater-sprout@sha256:fd91f90bdfbfac30288ca296962e36537f8fa311d34711904e027098227d9f49 docker.io/ollivier/clearwater-sprout:hunter],SizeBytes:298627136,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead@sha256:8467d6c9caefa6a402eba927ee31f8179b2dc3386ac4768dd04f4bd630a1b9e9 docker.io/ollivier/clearwater-homestead:hunter],SizeBytes:295167572,},ContainerImage{Names:[docker.io/ollivier/clearwater-ralf@sha256:7054a741dbd4ea45fa4efd8138c07e21d35fbed627971559de5df2d03883e94f docker.io/ollivier/clearwater-ralf:hunter],SizeBytes:287441316,},ContainerImage{Names:[docker.io/ollivier/clearwater-chronos@sha256:1d5f9b5385db632e3076463fec5a4db5d4a15ef21ac5a112cb42fb0551ffa36d 
docker.io/ollivier/clearwater-chronos:hunter],SizeBytes:285504787,},ContainerImage{Names:[k8s.gcr.io/etcd:3.4.13-0],SizeBytes:254659261,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.21.0-alpha.0],SizeBytes:136866158,},ContainerImage{Names:[docker.io/aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 docker.io/aquasec/kube-hunter:0.3.1],SizeBytes:124684106,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:112029652,},ContainerImage{Names:[docker.io/gluster/glusterdynamic-provisioner@sha256:90067cb05a7d217651e84576935934fd9dcff8e6bcdcbaa416bbf36fcd09dbd1 docker.io/gluster/glusterdynamic-provisioner:v1.0],SizeBytes:111200078,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner@sha256:8bc20b52ce066dd4ea3d9eaac40c04ea8a77f47c33789676580cf4c7c9ea3c3d k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner:v1.0],SizeBytes:111199402,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.21.0-alpha.0],SizeBytes:95511852,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.21.0-alpha.0],SizeBytes:88147262,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:85425365,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.21.0-alpha.0],SizeBytes:66088748,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:49230179,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:a3f7549ea04c419276e8b84e90a515bbce5bc8a057be2ed974ec45492eca346e k8s.gcr.io/e2e-test-images/agnhost:2.26],SizeBytes:49216572,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:4f3eec1e602e1e0b0e5858c8f3d399d0ef536799ec1f7666e025f211001cb706 k8s.gcr.io/e2e-test-images/agnhost:2.28],SizeBytes:49210832,},ContainerImage{Names:[k8s.gcr.io/coredns:1.7.0],SizeBytes:45355487,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:41902332,},ContainerImage{Names:[docker.io/library/httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a docker.io/library/httpd:2.4.39-alpine],SizeBytes:41901429,},ContainerImage{Names:[docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 docker.io/library/httpd:2.4.38-alpine],SizeBytes:40765017,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:40765006,},ContainerImage{Names:[docker.io/ollivier/clearwater-live-test@sha256:1ec1c46f531b5466e62e2a3f1e65e97336e8ea0ca50f11a4305775d6aeac9a58 docker.io/ollivier/clearwater-live-test:hunter],SizeBytes:39060692,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:ff02aacd9766d597883fabafc7ad604c719a57611db1bcc1564c69a45b000a55 
gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.17],SizeBytes:25311280,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:24757245,},ContainerImage{Names:[docker.io/pingcap/chaos-daemon@sha256:2138de599ca94104101fcd8f3bd04401f8c88f2bc721d0f77d55a81b85f4d92f docker.io/pingcap/chaos-daemon:v0.8.0],SizeBytes:21885344,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:20c828075d1e36f679d6a91e905b0927141eef5e15be0c9a1ca4a6a0ed9313d2 k8s.gcr.io/sig-storage/csi-provisioner:v2.1.0],SizeBytes:21205045,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:78e3393f5fd5ff6c1e5dada2478cfa456fb7164929e573cf9a87bf6532730679 k8s.gcr.io/sig-storage/csi-provisioner:v1.6.0],SizeBytes:19408504,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/regression-issue-74839@sha256:b4f1d8d61bdad84bd50442d161d5460e4019d53e989b64220fdbc62fc87d76bf k8s.gcr.io/e2e-test-images/regression-issue-74839:1.2],SizeBytes:18651485,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:2ffa647e8107cfd39e5f464e738dce014c9f5e51b108da36c3ab621048d0bbab k8s.gcr.io/sig-storage/csi-attacher:v2.2.0],SizeBytes:18451536,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:6c6a0332693a7c456378f6abd2bb40611826c1e1a733cadbdae2daab3125b71c k8s.gcr.io/sig-storage/csi-resizer:v0.5.0],SizeBytes:18412631,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:4051e85640c22f8e00c34dbd273576fc9e1e2829992656588062be9c0f69b04b k8s.gcr.io/e2e-test-images/nonroot:1.1],SizeBytes:17748448,},ContainerImage{Names:[docker.io/pingcap/chaos-mesh@sha256:b90c04665e0275602e52df37d4b6fdd4621df7d4e4c823fc63356559f67dea72 docker.io/pingcap/chaos-mesh:v0.8.0],SizeBytes:13863887,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e docker.io/coredns/coredns:1.8.0],SizeBytes:12945155,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:4a6e0769130686518325b21b0c1d0688b54e7c79244d48e1b15634e98e40c6ef docker.io/coredns/coredns:1.7.1],SizeBytes:12935701,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:642ff9910da6ea9a8624b0234eef52af9ca75ecbec474c5507cb096bdfbae4e5 docker.io/coredns/coredns:latest],SizeBytes:12893350,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 docker.io/aquasec/kube-bench:0.3.1],SizeBytes:8042926,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:9622c6a6dac7499a055a382930f4de82905a3c5735c0753f7094115c9c871309 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v1.3.0],SizeBytes:7717137,},ContainerImage{Names:[quay.io/coreos/etcd:v2.2.5],SizeBytes:7670543,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 22 00:27:26.541: INFO: Logging kubelet events for node latest-worker2 Mar 22 00:27:26.554: INFO: Logging pods the kubelet thinks is on node latest-worker2 Mar 22 00:27:26.594: INFO: hostexec-latest-worker2-hkjg4 started at 
2021-03-22 00:27:20 +0000 UTC (0+1 container statuses recorded) Mar 22 00:27:26.594: INFO: Container agnhost-container ready: true, restart count 0 Mar 22 00:27:26.594: INFO: csi-mockplugin-attacher-0 started at 2021-03-22 00:27:26 +0000 UTC (0+1 container statuses recorded) Mar 22 00:27:26.594: INFO: Container csi-attacher ready: false, restart count 0 Mar 22 00:27:26.594: INFO: chaos-daemon-4zjcg started at 2021-03-22 00:02:52 +0000 UTC (0+1 container statuses recorded) Mar 22 00:27:26.594: INFO: Container chaos-daemon ready: true, restart count 0 Mar 22 00:27:26.594: INFO: back-off-cap started at 2021-03-22 00:02:52 +0000 UTC (0+1 container statuses recorded) Mar 22 00:27:26.594: INFO: Container back-off-cap ready: false, restart count 9 Mar 22 00:27:26.594: INFO: kindnet-7qb7q started at 2021-03-22 00:02:52 +0000 UTC (0+1 container statuses recorded) Mar 22 00:27:26.594: INFO: Container kindnet-cni ready: true, restart count 0 Mar 22 00:27:26.594: INFO: kube-proxy-7q92q started at 2021-02-19 10:12:05 +0000 UTC (0+1 container statuses recorded) Mar 22 00:27:26.594: INFO: Container kube-proxy ready: true, restart count 0 Mar 22 00:27:26.594: INFO: csi-mockplugin-0 started at 2021-03-22 00:27:26 +0000 UTC (0+3 container statuses recorded) Mar 22 00:27:26.594: INFO: Container csi-provisioner ready: false, restart count 0 Mar 22 00:27:26.594: INFO: Container driver-registrar ready: false, restart count 0 Mar 22 00:27:26.594: INFO: Container mock ready: false, restart count 0 W0322 00:27:26.602522 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Mar 22 00:27:26.850: INFO: Latency metrics for node latest-worker2 Mar 22 00:27:26.850: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-2511" for this suite. 
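To make the failure summary that follows easier to interpret: the spec drives a NodePort Service whose spec.sessionAffinity is flipped between ClientIP and None, then waits (2m0s in this run) for traffic to spread back across endpoints. A minimal Go sketch of the object shape involved, assuming client-go's corev1 types; the selector, ports, and values are illustrative, not the exact ones the framework generates:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	// NodePort Service pinned to client IPs; the e2e spec toggles this
	// field and expects traffic distribution to change accordingly.
	svc := &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "affinity-nodeport-transition"},
		Spec: corev1.ServiceSpec{
			Type:            corev1.ServiceTypeNodePort,
			Selector:        map[string]string{"app": "affinity"}, // hypothetical label
			SessionAffinity: corev1.ServiceAffinityClientIP,
			Ports: []corev1.ServicePort{{
				Port:       80,
				TargetPort: intstr.FromInt(9376), // hypothetical backend port
			}},
		},
	}

	// Switching affinity back off is a one-field update; the 2m0s timeout
	// in the failure below is how long the suite waits to see requests
	// land on more than one endpoint after the switch.
	svc.Spec.SessionAffinity = corev1.ServiceAffinityNone
	fmt.Println(svc.Name, svc.Spec.SessionAffinity)
}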
[AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • Failure [194.875 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance] [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Mar 22 00:26:22.384: Unexpected error: <*errors.errorString | 0xc00271c060>: { s: "no subset of available IP address found for the endpoint affinity-nodeport-transition within timeout 2m0s", } no subset of available IP address found for the endpoint affinity-nodeport-transition within timeout 2m0s occurred /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2563 ------------------------------ {"msg":"FAILED [sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","total":330,"completed":121,"skipped":1957,"failed":6,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSS ------------------------------ [sig-node] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 00:27:26.965: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:53 [It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating pod test-webserver-fae3c728-fbcc-483c-959a-c8f1a490a381 in namespace container-probe-9562 Mar 22 00:27:34.392: INFO: Started pod test-webserver-fae3c728-fbcc-483c-959a-c8f1a490a381 in namespace container-probe-9562 STEP: checking the pod's current state and verifying that restartCount is present Mar 22 00:27:34.472: INFO: Initial restart count of pod test-webserver-fae3c728-fbcc-483c-959a-c8f1a490a381 is 0 STEP: deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 00:31:36.443: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-9562" for this suite. 
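The spec that just finished creates a webserver pod with an HTTP liveness probe against /healthz and asserts its restartCount never moves from 0 over roughly four minutes. A rough sketch of such a pod, assuming the v1.21-era client-go API (where Probe still embeds Handler rather than the later ProbeHandler); the image and thresholds are illustrative:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	// A webserver pod whose /healthz always succeeds, so the kubelet
	// should never restart it; the spec asserts restartCount stays 0.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "test-webserver"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "test-webserver",
				Image: "k8s.gcr.io/e2e-test-images/agnhost:2.28", // an image present in the node's image list above
				LivenessProbe: &corev1.Probe{
					// Handler was renamed ProbeHandler in client-go v0.22+;
					// this matches the v1.21-era API this log was produced with.
					Handler: corev1.Handler{
						HTTPGet: &corev1.HTTPGetAction{
							Path: "/healthz",
							Port: intstr.FromInt(80),
						},
					},
					InitialDelaySeconds: 15, // illustrative timings
					TimeoutSeconds:      1,
					FailureThreshold:    3,
				},
			}},
		},
	}
	fmt.Println(pod.Name)
}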
• [SLOW TEST:249.523 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":330,"completed":122,"skipped":1970,"failed":6,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]} SSS ------------------------------ [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 00:31:36.489: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name cm-test-opt-del-741e5bf8-1da7-46d3-9610-b47fb4fab538 STEP: Creating configMap with name cm-test-opt-upd-b7eed123-814a-45d8-ad8b-edb6522f7db0 STEP: Creating the pod Mar 22 00:31:37.120: INFO: The status of Pod pod-configmaps-9a405e52-83f5-4726-9fbc-132e5d56e980 is Pending, waiting for it to be Running (with Ready = true) Mar 22 00:31:39.165: INFO: The status of Pod pod-configmaps-9a405e52-83f5-4726-9fbc-132e5d56e980 is Pending, waiting for it to be Running (with Ready = true) Mar 22 00:31:41.125: INFO: The status of Pod pod-configmaps-9a405e52-83f5-4726-9fbc-132e5d56e980 is Pending, waiting for it to be Running (with Ready = true) Mar 22 00:31:43.183: INFO: The status of Pod pod-configmaps-9a405e52-83f5-4726-9fbc-132e5d56e980 is Pending, waiting for it to be Running (with Ready = true) Mar 22 00:31:45.126: INFO: The status of Pod pod-configmaps-9a405e52-83f5-4726-9fbc-132e5d56e980 is Running (Ready = true) STEP: Deleting configmap cm-test-opt-del-741e5bf8-1da7-46d3-9610-b47fb4fab538 STEP: Updating configmap cm-test-opt-upd-b7eed123-814a-45d8-ad8b-edb6522f7db0 STEP: Creating configMap with name cm-test-opt-create-f07f062d-518d-4e07-8ce8-3541ed3498c6 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 00:33:03.746: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4300" for this 
suite. • [SLOW TEST:87.266 seconds] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":330,"completed":123,"skipped":1973,"failed":6,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 00:33:03.756: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] custom resource defaulting for requests and from storage works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Mar 22 00:33:03.893: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 00:33:05.132: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-8746" for this suite. 
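For context on what "defaulting for requests and from storage" means: a default declared in a CRD's structural schema is applied by the API server both when an object is written and when a stored object is read back. A minimal sketch of such a schema using the apiextensions v1 Go types; the field name and default value are hypothetical, not taken from the suite's generated CRD:

package main

import (
	"fmt"

	apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
)

func main() {
	// A structural schema whose "color" field carries a default. The API
	// server applies it on create/update requests and again when objects
	// are decoded from storage, which is what the spec above verifies.
	schema := apiextensionsv1.JSONSchemaProps{
		Type: "object",
		Properties: map[string]apiextensionsv1.JSONSchemaProps{
			"spec": {
				Type: "object",
				Properties: map[string]apiextensionsv1.JSONSchemaProps{
					"color": {
						Type:    "string",
						Default: &apiextensionsv1.JSON{Raw: []byte(`"blue"`)}, // hypothetical default
					},
				},
			},
		},
	}
	fmt.Println(string(schema.Properties["spec"].Properties["color"].Default.Raw))
}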
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance]","total":330,"completed":124,"skipped":2005,"failed":6,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 00:33:05.143: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all services are removed when a namespace is deleted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a service in the namespace STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there is no service in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 00:33:11.700: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-5957" for this suite. STEP: Destroying namespace "nsdeletetest-4038" for this suite. Mar 22 00:33:11.719: INFO: Namespace nsdeletetest-4038 was already deleted STEP: Destroying namespace "nsdeletetest-9311" for this suite. 
• [SLOW TEST:6.580 seconds] [sig-api-machinery] Namespaces [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all services are removed when a namespace is deleted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":330,"completed":125,"skipped":2024,"failed":6,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 00:33:11.724: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin Mar 22 00:33:11.818: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0d303b56-660d-45d8-a03c-10a65d480075" in namespace "projected-6191" to be "Succeeded or Failed" Mar 22 00:33:11.933: INFO: Pod "downwardapi-volume-0d303b56-660d-45d8-a03c-10a65d480075": Phase="Pending", Reason="", readiness=false. Elapsed: 114.864253ms Mar 22 00:33:14.292: INFO: Pod "downwardapi-volume-0d303b56-660d-45d8-a03c-10a65d480075": Phase="Pending", Reason="", readiness=false. Elapsed: 2.474616536s Mar 22 00:33:16.296: INFO: Pod "downwardapi-volume-0d303b56-660d-45d8-a03c-10a65d480075": Phase="Running", Reason="", readiness=true. Elapsed: 4.478603879s Mar 22 00:33:18.302: INFO: Pod "downwardapi-volume-0d303b56-660d-45d8-a03c-10a65d480075": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.484126959s STEP: Saw pod success Mar 22 00:33:18.302: INFO: Pod "downwardapi-volume-0d303b56-660d-45d8-a03c-10a65d480075" satisfied condition "Succeeded or Failed" Mar 22 00:33:18.305: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-0d303b56-660d-45d8-a03c-10a65d480075 container client-container: STEP: delete the pod Mar 22 00:33:18.375: INFO: Waiting for pod downwardapi-volume-0d303b56-660d-45d8-a03c-10a65d480075 to disappear Mar 22 00:33:18.378: INFO: Pod downwardapi-volume-0d303b56-660d-45d8-a03c-10a65d480075 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 00:33:18.379: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6191" for this suite. • [SLOW TEST:6.663 seconds] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":330,"completed":126,"skipped":2088,"failed":6,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] HostPort validates that there is no conflict between pods with same hostPort but different hostIP and protocol [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-network] HostPort /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 00:33:18.388: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename hostport STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] HostPort /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/hostport.go:47 [It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Trying to create a pod(pod1) with hostport 54323 and hostIP 127.0.0.1 and expect scheduled Mar 22 00:33:18.505: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) Mar 22 00:33:20.509: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) Mar 22 00:33:22.529: INFO: The status of Pod pod1 is Running (Ready = false) Mar 22 
00:33:24.510: INFO: The status of Pod pod1 is Running (Ready = false) Mar 22 00:33:26.511: INFO: The status of Pod pod1 is Running (Ready = false) Mar 22 00:33:28.511: INFO: The status of Pod pod1 is Running (Ready = false) Mar 22 00:33:30.510: INFO: The status of Pod pod1 is Running (Ready = true) STEP: Trying to create another pod(pod2) with hostport 54323 but hostIP 172.18.0.9 on the node which pod1 resides and expect scheduled Mar 22 00:33:30.544: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Mar 22 00:33:32.622: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Mar 22 00:33:34.549: INFO: The status of Pod pod2 is Running (Ready = false) Mar 22 00:33:36.550: INFO: The status of Pod pod2 is Running (Ready = false) Mar 22 00:33:38.559: INFO: The status of Pod pod2 is Running (Ready = false) Mar 22 00:33:40.565: INFO: The status of Pod pod2 is Running (Ready = false) Mar 22 00:33:42.549: INFO: The status of Pod pod2 is Running (Ready = true) STEP: Trying to create a third pod(pod3) with hostport 54323, hostIP 172.18.0.9 but use UDP protocol on the node which pod2 resides Mar 22 00:33:42.576: INFO: The status of Pod pod3 is Pending, waiting for it to be Running (with Ready = true) Mar 22 00:33:44.581: INFO: The status of Pod pod3 is Pending, waiting for it to be Running (with Ready = true) Mar 22 00:33:46.582: INFO: The status of Pod pod3 is Running (Ready = false) Mar 22 00:33:48.581: INFO: The status of Pod pod3 is Running (Ready = false) Mar 22 00:33:50.580: INFO: The status of Pod pod3 is Running (Ready = false) Mar 22 00:33:52.581: INFO: The status of Pod pod3 is Running (Ready = false) Mar 22 00:33:54.581: INFO: The status of Pod pod3 is Running (Ready = true) Mar 22 00:33:54.601: INFO: The status of Pod e2e-host-exec is Pending, waiting for it to be Running (with Ready = true) Mar 22 00:33:56.606: INFO: The status of Pod e2e-host-exec is Pending, waiting for it to be Running (with Ready = true) Mar 22 00:33:58.606: INFO: The status of Pod e2e-host-exec is Running (Ready = true) STEP: checking connectivity from pod e2e-host-exec to serverIP: 127.0.0.1, port: 54323 Mar 22 00:33:58.609: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g --connect-timeout 5 --interface 172.18.0.9 http://127.0.0.1:54323/hostname] Namespace:hostport-1795 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 22 00:33:58.609: INFO: >>> kubeConfig: /root/.kube/config STEP: checking connectivity from pod e2e-host-exec to serverIP: 172.18.0.9, port: 54323 Mar 22 00:33:58.739: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g --connect-timeout 5 http://172.18.0.9:54323/hostname] Namespace:hostport-1795 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 22 00:33:58.739: INFO: >>> kubeConfig: /root/.kube/config STEP: checking connectivity from pod e2e-host-exec to serverIP: 172.18.0.9, port: 54323 UDP Mar 22 00:33:58.849: INFO: ExecWithOptions {Command:[/bin/sh -c nc -vuz -w 5 172.18.0.9 54323] Namespace:hostport-1795 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 22 00:33:58.849: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-network] HostPort /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 
00:34:03.946: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "hostport-1795" for this suite. • [SLOW TEST:45.574 seconds] [sig-network] HostPort /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 validates that there is no conflict between pods with same hostPort but different hostIP and protocol [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] HostPort validates that there is no conflict between pods with same hostPort but different hostIP and protocol [LinuxOnly] [Conformance]","total":330,"completed":127,"skipped":2113,"failed":6,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Pods should run through the lifecycle of Pods and PodStatus [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 00:34:03.962: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:187 [It] should run through the lifecycle of Pods and PodStatus [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a Pod with a static label STEP: watching for Pod to be ready Mar 22 00:34:04.073: INFO: observed Pod pod-test in namespace pods-8101 in phase Pending with labels: map[test-pod-static:true] & conditions [] Mar 22 00:34:04.155: INFO: observed Pod pod-test in namespace pods-8101 in phase Pending with labels: map[test-pod-static:true] & conditions [{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-03-22 00:34:04 +0000 UTC }] Mar 22 00:34:04.187: INFO: observed Pod pod-test in namespace pods-8101 in phase Pending with labels: map[test-pod-static:true] & conditions [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-03-22 00:34:04 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-03-22 00:34:04 +0000 UTC ContainersNotReady containers with unready status: [pod-test]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-03-22 00:34:04 +0000 UTC ContainersNotReady containers with unready status: [pod-test]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-03-22 00:34:04 +0000 UTC }] Mar 22 00:34:07.551: INFO: Found Pod pod-test in namespace pods-8101 in phase Running with labels: map[test-pod-static:true] & conditions [{Initialized True 0001-01-01 00:00:00 +0000 UTC 
2021-03-22 00:34:04 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-03-22 00:34:07 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-03-22 00:34:07 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-03-22 00:34:04 +0000 UTC }] STEP: patching the Pod with a new Label and updated data Mar 22 00:34:07.574: INFO: observed event type ADDED STEP: getting the Pod and ensuring that it's patched STEP: getting the PodStatus STEP: replacing the Pod's status Ready condition to False STEP: check the Pod again to ensure its Ready conditions are False STEP: deleting the Pod via a Collection with a LabelSelector STEP: watching for the Pod to be deleted Mar 22 00:34:07.641: INFO: observed event type ADDED Mar 22 00:34:07.641: INFO: observed event type MODIFIED Mar 22 00:34:07.642: INFO: observed event type MODIFIED Mar 22 00:34:07.642: INFO: observed event type MODIFIED Mar 22 00:34:07.642: INFO: observed event type MODIFIED Mar 22 00:34:07.642: INFO: observed event type MODIFIED Mar 22 00:34:07.642: INFO: observed event type MODIFIED [AfterEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 00:34:07.642: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-8101" for this suite. •{"msg":"PASSED [sig-node] Pods should run through the lifecycle of Pods and PodStatus [Conformance]","total":330,"completed":128,"skipped":2133,"failed":6,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 00:34:07.717: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name projected-configmap-test-volume-6761acf5-3c0f-4a95-a592-ef911aad03a2 STEP: Creating a pod to test consume configMaps Mar 22 00:34:08.191: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-0663a53c-014b-4ebc-83fb-f075caaf7b49" in namespace "projected-6514" to be "Succeeded or Failed" Mar 22 00:34:08.207: INFO: Pod "pod-projected-configmaps-0663a53c-014b-4ebc-83fb-f075caaf7b49": Phase="Pending", Reason="", readiness=false. 
Elapsed: 15.755875ms Mar 22 00:34:10.418: INFO: Pod "pod-projected-configmaps-0663a53c-014b-4ebc-83fb-f075caaf7b49": Phase="Pending", Reason="", readiness=false. Elapsed: 2.227315594s Mar 22 00:34:12.476: INFO: Pod "pod-projected-configmaps-0663a53c-014b-4ebc-83fb-f075caaf7b49": Phase="Pending", Reason="", readiness=false. Elapsed: 4.285132266s Mar 22 00:34:14.481: INFO: Pod "pod-projected-configmaps-0663a53c-014b-4ebc-83fb-f075caaf7b49": Phase="Pending", Reason="", readiness=false. Elapsed: 6.29018715s Mar 22 00:34:16.486: INFO: Pod "pod-projected-configmaps-0663a53c-014b-4ebc-83fb-f075caaf7b49": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.294655692s STEP: Saw pod success Mar 22 00:34:16.486: INFO: Pod "pod-projected-configmaps-0663a53c-014b-4ebc-83fb-f075caaf7b49" satisfied condition "Succeeded or Failed" Mar 22 00:34:16.489: INFO: Trying to get logs from node latest-worker pod pod-projected-configmaps-0663a53c-014b-4ebc-83fb-f075caaf7b49 container agnhost-container: STEP: delete the pod Mar 22 00:34:16.547: INFO: Waiting for pod pod-projected-configmaps-0663a53c-014b-4ebc-83fb-f075caaf7b49 to disappear Mar 22 00:34:16.571: INFO: Pod pod-projected-configmaps-0663a53c-014b-4ebc-83fb-f075caaf7b49 no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 00:34:16.571: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6514" for this suite. • [SLOW TEST:8.863 seconds] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":330,"completed":129,"skipped":2164,"failed":6,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 00:34:16.581: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 [It] should provide container's cpu limit [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin Mar 22 00:34:16.719: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d1b0ae25-2b9f-4106-89ab-b288c1abc85f" in namespace "downward-api-1980" to be "Succeeded or Failed" Mar 22 00:34:16.757: INFO: Pod "downwardapi-volume-d1b0ae25-2b9f-4106-89ab-b288c1abc85f": Phase="Pending", Reason="", readiness=false. Elapsed: 38.020865ms Mar 22 00:34:18.813: INFO: Pod "downwardapi-volume-d1b0ae25-2b9f-4106-89ab-b288c1abc85f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.094480554s Mar 22 00:34:20.818: INFO: Pod "downwardapi-volume-d1b0ae25-2b9f-4106-89ab-b288c1abc85f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.098498136s STEP: Saw pod success Mar 22 00:34:20.818: INFO: Pod "downwardapi-volume-d1b0ae25-2b9f-4106-89ab-b288c1abc85f" satisfied condition "Succeeded or Failed" Mar 22 00:34:20.820: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-d1b0ae25-2b9f-4106-89ab-b288c1abc85f container client-container: STEP: delete the pod Mar 22 00:34:21.000: INFO: Waiting for pod downwardapi-volume-d1b0ae25-2b9f-4106-89ab-b288c1abc85f to disappear Mar 22 00:34:21.014: INFO: Pod downwardapi-volume-d1b0ae25-2b9f-4106-89ab-b288c1abc85f no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 00:34:21.014: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1980" for this suite. 
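What this spec verifies is the downward API volume's resourceFieldRef: the container's CPU limit is rendered into a file the container can read back, which is what the framework then checks in the pod's logs. A minimal pod sketch with illustrative names and a hypothetical 500m limit:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-demo"}, // hypothetical name
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "client-container",
				Image: "k8s.gcr.io/e2e-test-images/agnhost:2.28",
				Resources: corev1.ResourceRequirements{
					Limits: corev1.ResourceList{
						corev1.ResourceCPU: resource.MustParse("500m"), // hypothetical limit
					},
				},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					// resourceFieldRef projects the named container's CPU
					// limit into /etc/podinfo/cpu_limit inside the pod.
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path: "cpu_limit",
							ResourceFieldRef: &corev1.ResourceFieldSelector{
								ContainerName: "client-container",
								Resource:      "limits.cpu",
							},
						}},
					},
				},
			}},
		},
	}
	fmt.Println(pod.Spec.Volumes[0].Name)
}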
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":330,"completed":130,"skipped":2175,"failed":6,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 00:34:21.022: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating projection with configMap that has name projected-configmap-test-upd-2281c916-33dd-4cbd-a201-c58a3afac786 STEP: Creating the pod Mar 22 00:34:21.194: INFO: The status of Pod pod-projected-configmaps-612853cc-6498-47f5-a0e6-222f2574a58b is Pending, waiting for it to be Running (with Ready = true) Mar 22 00:34:23.199: INFO: The status of Pod pod-projected-configmaps-612853cc-6498-47f5-a0e6-222f2574a58b is Pending, waiting for it to be Running (with Ready = true) Mar 22 00:34:25.202: INFO: The status of Pod pod-projected-configmaps-612853cc-6498-47f5-a0e6-222f2574a58b is Running (Ready = true) STEP: Updating configmap projected-configmap-test-upd-2281c916-33dd-4cbd-a201-c58a3afac786 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 00:34:29.242: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6774" for this suite. 
• [SLOW TEST:8.231 seconds] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":330,"completed":131,"skipped":2209,"failed":6,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]} SS ------------------------------ [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes control plane services is included in cluster-info [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 00:34:29.253: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241 [It] should check if Kubernetes control plane services is included in cluster-info [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: validating cluster-info Mar 22 00:34:29.482: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=kubectl-4338 cluster-info' Mar 22 00:34:38.161: INFO: stderr: "" Mar 22 00:34:38.161: INFO: stdout: "\x1b[0;32mKubernetes control plane\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:41865\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 00:34:38.161: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4338" for this suite. 
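The cluster-info check above boils down to running kubectl and matching its stdout, which kubectl colorizes (hence the raw \x1b[...m escape codes in the captured output). A standalone sketch of the same check, assuming kubectl on PATH and the kubeconfig path from this log; this is not the framework's RunKubectl helper:

package main

import (
	"fmt"
	"os/exec"
	"regexp"
	"strings"
)

func main() {
	// Run `kubectl cluster-info` and look for the control-plane line.
	out, err := exec.Command("kubectl",
		"--kubeconfig", "/root/.kube/config", "cluster-info").CombinedOutput()
	if err != nil {
		panic(err)
	}
	// Strip ANSI color escapes before matching, since kubectl emits them
	// even when the output is captured (as seen in the log above).
	plain := regexp.MustCompile(`\x1b\[[0-9;]*m`).ReplaceAllString(string(out), "")
	fmt.Println(strings.Contains(plain, "Kubernetes control plane is running at"))
}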
• [SLOW TEST:8.917 seconds]
[sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl cluster-info
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1058
    should check if Kubernetes control plane services is included in cluster-info [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes control plane services is included in cluster-info [Conformance]","total":330,"completed":132,"skipped":2211,"failed":6,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-network] EndpointSlice
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 22 00:34:38.171: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename endpointslice
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] EndpointSlice
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/endpointslice.go:46
[It] should have Endpoints and EndpointSlices pointing to API Server [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Mar 22 00:34:38.239: FAIL: error creating EndpointSlice resource
Unexpected error:
    <*errors.StatusError | 0xc00107f360>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {
                SelfLink: "",
                ResourceVersion: "",
                Continue: "",
                RemainingItemCount: nil,
            },
            Status: "Failure",
            Message: "the server could not find the requested resource",
            Reason: "NotFound",
            Details: {Name: "", Group: "", Kind: "", UID: "", Causes: nil, RetryAfterSeconds: 0},
            Code: 404,
        },
    }
    the server could not find the requested resource
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/network.glob..func6.2()
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/endpointslice.go:70 +0x2bb
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc002c6a180)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c
k8s.io/kubernetes/test/e2e.TestE2E(0xc002c6a180)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b
testing.tRunner(0xc002c6a180, 0x6d60740)
	/usr/local/go/src/testing/testing.go:1194 +0xef
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1239 +0x2b3
[AfterEach] [sig-network] EndpointSlice
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 STEP: Collecting events from namespace "endpointslice-4065". STEP: Found 0 events. Mar 22 00:34:38.264: INFO: POD NODE PHASE GRACE CONDITIONS Mar 22 00:34:38.265: INFO: Mar 22 00:34:38.269: INFO: Logging node info for node latest-control-plane Mar 22 00:34:38.273: INFO: Node Info: &Node{ObjectMeta:{latest-control-plane 490b9532-4cb6-4803-8805-500c50bef538 6996676 0 2021-02-19 10:11:38 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-control-plane kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master:] map[kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2021-02-19 10:11:38 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {kubeadm Update v1 2021-02-19 10:11:39 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{}}}}} {kube-controller-manager Update v1 2021-02-19 10:12:16 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:kind://docker/latest/latest-control-plane,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: 
{{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-03-22 00:29:37 +0000 UTC,LastTransitionTime:2021-02-19 10:11:38 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-03-22 00:29:37 +0000 UTC,LastTransitionTime:2021-02-19 10:11:38 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-03-22 00:29:37 +0000 UTC,LastTransitionTime:2021-02-19 10:11:38 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-03-22 00:29:37 +0000 UTC,LastTransitionTime:2021-02-19 10:12:15 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.14,},NodeAddress{Type:Hostname,Address:latest-control-plane,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:2277e49732264d9b915753a27b5b08cc,SystemUUID:3fcd47a6-9190-448f-a26a-9823c0424f23,BootID:b267d78b-f69b-4338-80e8-3f4944338e5d,KernelVersion:4.15.0-118-generic,OSImage:Ubuntu 20.04 LTS,ContainerRuntimeVersion:containerd://1.3.3-14-g449e9269,KubeletVersion:v1.21.0-alpha.0,KubeProxyVersion:v1.21.0-alpha.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/etcd:3.4.13-0],SizeBytes:254659261,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.21.0-alpha.0],SizeBytes:136866158,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.21.0-alpha.0],SizeBytes:95511852,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.21.0-alpha.0],SizeBytes:88147262,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.21.0-alpha.0],SizeBytes:66088748,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:a3f7549ea04c419276e8b84e90a515bbce5bc8a057be2ed974ec45492eca346e k8s.gcr.io/e2e-test-images/agnhost:2.26],SizeBytes:49216572,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:4f3eec1e602e1e0b0e5858c8f3d399d0ef536799ec1f7666e025f211001cb706 k8s.gcr.io/e2e-test-images/agnhost:2.28],SizeBytes:49210832,},ContainerImage{Names:[k8s.gcr.io/coredns:1.7.0],SizeBytes:45355487,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 docker.io/aquasec/kube-bench:0.3.1],SizeBytes:8042926,},ContainerImage{Names:[k8s.gcr.io/pause:3.2],SizeBytes:685724,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:301268,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 22 00:34:38.273: INFO: Logging kubelet events for node latest-control-plane Mar 22 00:34:38.279: 
INFO: Logging pods the kubelet thinks is on node latest-control-plane
Mar 22 00:34:38.308: INFO: kube-controller-manager-latest-control-plane started at 2021-02-19 10:11:46 +0000 UTC (0+1 container statuses recorded)
Mar 22 00:34:38.308: INFO: Container kube-controller-manager ready: true, restart count 0
Mar 22 00:34:38.308: INFO: kube-scheduler-latest-control-plane started at 2021-02-19 10:11:46 +0000 UTC (0+1 container statuses recorded)
Mar 22 00:34:38.308: INFO: Container kube-scheduler ready: true, restart count 0
Mar 22 00:34:38.308: INFO: kube-apiserver-latest-control-plane started at 2021-02-19 10:11:46 +0000 UTC (0+1 container statuses recorded)
Mar 22 00:34:38.308: INFO: Container kube-apiserver ready: true, restart count 0
Mar 22 00:34:38.308: INFO: kindnet-94zqp started at 2021-02-19 10:11:54 +0000 UTC (0+1 container statuses recorded)
Mar 22 00:34:38.308: INFO: Container kindnet-cni ready: true, restart count 0
Mar 22 00:34:38.308: INFO: coredns-74ff55c5b-9rxsk started at 2021-03-22 00:01:47 +0000 UTC (0+1 container statuses recorded)
Mar 22 00:34:38.308: INFO: Container coredns ready: true, restart count 0
Mar 22 00:34:38.308: INFO: local-path-provisioner-8b46957d4-54gls started at 2021-02-19 10:12:21 +0000 UTC (0+1 container statuses recorded)
Mar 22 00:34:38.308: INFO: Container local-path-provisioner ready: true, restart count 0
Mar 22 00:34:38.308: INFO: etcd-latest-control-plane started at 2021-02-19 10:11:46 +0000 UTC (0+1 container statuses recorded)
Mar 22 00:34:38.308: INFO: Container etcd ready: true, restart count 0
Mar 22 00:34:38.308: INFO: kube-proxy-6jdsd started at 2021-02-19 10:11:54 +0000 UTC (0+1 container statuses recorded)
Mar 22 00:34:38.308: INFO: Container kube-proxy ready: true, restart count 0
Mar 22 00:34:38.308: INFO: coredns-74ff55c5b-tqd5x started at 2021-03-22 00:01:48 +0000 UTC (0+1 container statuses recorded)
Mar 22 00:34:38.308: INFO: Container coredns ready: true, restart count 0
W0322 00:34:38.315394 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
Mar 22 00:34:38.408: INFO: Latency metrics for node latest-control-plane Mar 22 00:34:38.408: INFO: Logging node info for node latest-worker Mar 22 00:34:38.412: INFO: Node Info: &Node{ObjectMeta:{latest-worker 52cd6d4b-d53f-435d-801a-04c2822dec44 6997713 0 2021-02-19 10:12:05 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux io.kubernetes.storage.mock/node:some-mock-node kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-worker kubernetes.io/os:linux] map[csi.volume.kubernetes.io/nodeid:{"csi-mock-csi-mock-volumes-11":"csi-mock-csi-mock-volumes-11","csi-mock-csi-mock-volumes-1170":"csi-mock-csi-mock-volumes-1170","csi-mock-csi-mock-volumes-1204":"csi-mock-csi-mock-volumes-1204","csi-mock-csi-mock-volumes-1243":"csi-mock-csi-mock-volumes-1243","csi-mock-csi-mock-volumes-135":"csi-mock-csi-mock-volumes-135","csi-mock-csi-mock-volumes-1517":"csi-mock-csi-mock-volumes-1517","csi-mock-csi-mock-volumes-1561":"csi-mock-csi-mock-volumes-1561","csi-mock-csi-mock-volumes-1595":"csi-mock-csi-mock-volumes-1595","csi-mock-csi-mock-volumes-1714":"csi-mock-csi-mock-volumes-1714","csi-mock-csi-mock-volumes-1732":"csi-mock-csi-mock-volumes-1732","csi-mock-csi-mock-volumes-1876":"csi-mock-csi-mock-volumes-1876","csi-mock-csi-mock-volumes-1945":"csi-mock-csi-mock-volumes-1945","csi-mock-csi-mock-volumes-2041":"csi-mock-csi-mock-volumes-2041","csi-mock-csi-mock-volumes-2057":"csi-mock-csi-mock-volumes-2057","csi-mock-csi-mock-volumes-2208":"csi-mock-csi-mock-volumes-2208","csi-mock-csi-mock-volumes-2302":"csi-mock-csi-mock-volumes-2302","csi-mock-csi-mock-volumes-2459":"csi-mock-csi-mock-volumes-2459","csi-mock-csi-mock-volumes-2511":"csi-mock-csi-mock-volumes-2511","csi-mock-csi-mock-volumes-2706":"csi-mock-csi-mock-volumes-2706","csi-mock-csi-mock-volumes-2710":"csi-mock-csi-mock-volumes-2710","csi-mock-csi-mock-volumes-273":"csi-mock-csi-mock-volumes-273","csi-mock-csi-mock-volumes-2782":"csi-mock-csi-mock-volumes-2782","csi-mock-csi-mock-volumes-2797":"csi-mock-csi-mock-volumes-2797","csi-mock-csi-mock-volumes-2805":"csi-mock-csi-mock-volumes-2805","csi-mock-csi-mock-volumes-2843":"csi-mock-csi-mock-volumes-2843","csi-mock-csi-mock-volumes-3075":"csi-mock-csi-mock-volumes-3075","csi-mock-csi-mock-volumes-3107":"csi-mock-csi-mock-volumes-3107","csi-mock-csi-mock-volumes-3115":"csi-mock-csi-mock-volumes-3115","csi-mock-csi-mock-volumes-3164":"csi-mock-csi-mock-volumes-3164","csi-mock-csi-mock-volumes-3202":"csi-mock-csi-mock-volumes-3202","csi-mock-csi-mock-volumes-3235":"csi-mock-csi-mock-volumes-3235","csi-mock-csi-mock-volumes-3298":"csi-mock-csi-mock-volumes-3298","csi-mock-csi-mock-volumes-3313":"csi-mock-csi-mock-volumes-3313","csi-mock-csi-mock-volumes-3364":"csi-mock-csi-mock-volumes-3364","csi-mock-csi-mock-volumes-3488":"csi-mock-csi-mock-volumes-3488","csi-mock-csi-mock-volumes-3490":"csi-mock-csi-mock-volumes-3490","csi-mock-csi-mock-volumes-3546":"csi-mock-csi-mock-volumes-3546","csi-mock-csi-mock-volumes-3568":"csi-mock-csi-mock-volumes-3568","csi-mock-csi-mock-volumes-3595":"csi-mock-csi-mock-volumes-3595","csi-mock-csi-mock-volumes-3615":"csi-mock-csi-mock-volumes-3615","csi-mock-csi-mock-volumes-3622":"csi-mock-csi-mock-volumes-3622","csi-mock-csi-mock-volumes-3660":"csi-mock-csi-mock-volumes-3660","csi-mock-csi-mock-volumes-3738":"csi-mock-csi-mock-volumes-3738","csi-mock-csi-mock-volumes-380":"csi-mock-csi-mock-volumes-380","csi-mock-csi-mock-volumes-3905":"csi-mock-csi-mock-volumes-3905","csi-mock-csi-mock-volumes-3983":"csi-mock-csi-mock-volumes-3983","csi-mock
-csi-mock-volumes-4658":"csi-mock-csi-mock-volumes-4658","csi-mock-csi-mock-volumes-4689":"csi-mock-csi-mock-volumes-4689","csi-mock-csi-mock-volumes-4839":"csi-mock-csi-mock-volumes-4839","csi-mock-csi-mock-volumes-4871":"csi-mock-csi-mock-volumes-4871","csi-mock-csi-mock-volumes-4885":"csi-mock-csi-mock-volumes-4885","csi-mock-csi-mock-volumes-4888":"csi-mock-csi-mock-volumes-4888","csi-mock-csi-mock-volumes-5028":"csi-mock-csi-mock-volumes-5028","csi-mock-csi-mock-volumes-5118":"csi-mock-csi-mock-volumes-5118","csi-mock-csi-mock-volumes-5120":"csi-mock-csi-mock-volumes-5120","csi-mock-csi-mock-volumes-5160":"csi-mock-csi-mock-volumes-5160","csi-mock-csi-mock-volumes-5164":"csi-mock-csi-mock-volumes-5164","csi-mock-csi-mock-volumes-5225":"csi-mock-csi-mock-volumes-5225","csi-mock-csi-mock-volumes-526":"csi-mock-csi-mock-volumes-526","csi-mock-csi-mock-volumes-5365":"csi-mock-csi-mock-volumes-5365","csi-mock-csi-mock-volumes-5399":"csi-mock-csi-mock-volumes-5399","csi-mock-csi-mock-volumes-5443":"csi-mock-csi-mock-volumes-5443","csi-mock-csi-mock-volumes-5520":"csi-mock-csi-mock-volumes-5520","csi-mock-csi-mock-volumes-5561":"csi-mock-csi-mock-volumes-5561","csi-mock-csi-mock-volumes-5608":"csi-mock-csi-mock-volumes-5608","csi-mock-csi-mock-volumes-5652":"csi-mock-csi-mock-volumes-5652","csi-mock-csi-mock-volumes-5672":"csi-mock-csi-mock-volumes-5672","csi-mock-csi-mock-volumes-569":"csi-mock-csi-mock-volumes-569","csi-mock-csi-mock-volumes-5759":"csi-mock-csi-mock-volumes-5759","csi-mock-csi-mock-volumes-5910":"csi-mock-csi-mock-volumes-5910","csi-mock-csi-mock-volumes-6046":"csi-mock-csi-mock-volumes-6046","csi-mock-csi-mock-volumes-6099":"csi-mock-csi-mock-volumes-6099","csi-mock-csi-mock-volumes-621":"csi-mock-csi-mock-volumes-621","csi-mock-csi-mock-volumes-6347":"csi-mock-csi-mock-volumes-6347","csi-mock-csi-mock-volumes-6447":"csi-mock-csi-mock-volumes-6447","csi-mock-csi-mock-volumes-6752":"csi-mock-csi-mock-volumes-6752","csi-mock-csi-mock-volumes-6763":"csi-mock-csi-mock-volumes-6763","csi-mock-csi-mock-volumes-7184":"csi-mock-csi-mock-volumes-7184","csi-mock-csi-mock-volumes-7244":"csi-mock-csi-mock-volumes-7244","csi-mock-csi-mock-volumes-7259":"csi-mock-csi-mock-volumes-7259","csi-mock-csi-mock-volumes-726":"csi-mock-csi-mock-volumes-726","csi-mock-csi-mock-volumes-7302":"csi-mock-csi-mock-volumes-7302","csi-mock-csi-mock-volumes-7346":"csi-mock-csi-mock-volumes-7346","csi-mock-csi-mock-volumes-7378":"csi-mock-csi-mock-volumes-7378","csi-mock-csi-mock-volumes-7385":"csi-mock-csi-mock-volumes-7385","csi-mock-csi-mock-volumes-746":"csi-mock-csi-mock-volumes-746","csi-mock-csi-mock-volumes-7574":"csi-mock-csi-mock-volumes-7574","csi-mock-csi-mock-volumes-7712":"csi-mock-csi-mock-volumes-7712","csi-mock-csi-mock-volumes-7749":"csi-mock-csi-mock-volumes-7749","csi-mock-csi-mock-volumes-7820":"csi-mock-csi-mock-volumes-7820","csi-mock-csi-mock-volumes-7946":"csi-mock-csi-mock-volumes-7946","csi-mock-csi-mock-volumes-8159":"csi-mock-csi-mock-volumes-8159","csi-mock-csi-mock-volumes-8382":"csi-mock-csi-mock-volumes-8382","csi-mock-csi-mock-volumes-8458":"csi-mock-csi-mock-volumes-8458","csi-mock-csi-mock-volumes-8569":"csi-mock-csi-mock-volumes-8569","csi-mock-csi-mock-volumes-8626":"csi-mock-csi-mock-volumes-8626","csi-mock-csi-mock-volumes-8627":"csi-mock-csi-mock-volumes-8627","csi-mock-csi-mock-volumes-8774":"csi-mock-csi-mock-volumes-8774","csi-mock-csi-mock-volumes-8777":"csi-mock-csi-mock-volumes-8777","csi-mock-csi-mock-volumes-8880":"csi-mock-csi-mock-volumes-8880","csi-mock-
csi-mock-volumes-923":"csi-mock-csi-mock-volumes-923","csi-mock-csi-mock-volumes-9279":"csi-mock-csi-mock-volumes-9279","csi-mock-csi-mock-volumes-9372":"csi-mock-csi-mock-volumes-9372","csi-mock-csi-mock-volumes-9687":"csi-mock-csi-mock-volumes-9687","csi-mock-csi-mock-volumes-969":"csi-mock-csi-mock-volumes-969","csi-mock-csi-mock-volumes-983":"csi-mock-csi-mock-volumes-983","csi-mock-csi-mock-volumes-9858":"csi-mock-csi-mock-volumes-9858","csi-mock-csi-mock-volumes-9983":"csi-mock-csi-mock-volumes-9983","csi-mock-csi-mock-volumes-9992":"csi-mock-csi-mock-volumes-9992","csi-mock-csi-mock-volumes-9995":"csi-mock-csi-mock-volumes-9995"} kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-02-19 10:12:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {kube-controller-manager Update v1 2021-03-22 00:07:53 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}}}}} {kubelet Update v1 2021-03-22 00:08:24 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:csi.volume.kubernetes.io/nodeid":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:io.kubernetes.storage.mock/node":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:kind://docker/latest/latest-worker,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 
DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-03-22 00:33:28 +0000 UTC,LastTransitionTime:2021-02-19 10:12:05 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-03-22 00:33:28 +0000 UTC,LastTransitionTime:2021-02-19 10:12:05 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-03-22 00:33:28 +0000 UTC,LastTransitionTime:2021-02-19 10:12:05 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-03-22 00:33:28 +0000 UTC,LastTransitionTime:2021-02-19 10:12:25 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.9,},NodeAddress{Type:Hostname,Address:latest-worker,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:9501a3ffa7bf40adaca8f4cb3d1dcc93,SystemUUID:6921bb21-67c9-42ea-b514-81b1daac5968,BootID:b267d78b-f69b-4338-80e8-3f4944338e5d,KernelVersion:4.15.0-118-generic,OSImage:Ubuntu 20.04 LTS,ContainerRuntimeVersion:containerd://1.3.3-14-g449e9269,KubeletVersion:v1.21.0-alpha.0,KubeProxyVersion:v1.21.0-alpha.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/ollivier/clearwater-cassandra@sha256:7460a5e29c91e16bd36534b5284b27832e1c6e0f7c22a4bb79eda79942d250e1 docker.io/ollivier/clearwater-cassandra:hunter],SizeBytes:386500834,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead-prov@sha256:ef3803b4336ea217f9a467dac7f5c4bc3d2cd0f3cbbcc6c101419506fc3a6fa4 docker.io/ollivier/clearwater-homestead-prov:hunter],SizeBytes:360721934,},ContainerImage{Names:[docker.io/ollivier/clearwater-ellis@sha256:abcaf4a9ec74aa746f3b3949f05ee359349fce2b0b3b2deedf67164719cff5dc docker.io/ollivier/clearwater-ellis:hunter],SizeBytes:351519591,},ContainerImage{Names:[docker.io/ollivier/clearwater-homer@sha256:f8738d756137990120f6aafca7e0bc57186ab591a4b51508ba9ddf30688f4de1 docker.io/ollivier/clearwater-homer:hunter],SizeBytes:344304298,},ContainerImage{Names:[docker.io/ollivier/clearwater-astaire@sha256:5bc730435563dc3e31f3a3b7bb58b5899c453929b624052622b3a9a1c2fd04d8 docker.io/ollivier/clearwater-astaire:hunter],SizeBytes:327310970,},ContainerImage{Names:[docker.io/ollivier/clearwater-bono@sha256:0da41ddbf72c158f8e617015450587634cdf6856696005601b2108df92a27254 docker.io/ollivier/clearwater-bono:hunter],SizeBytes:303708624,},ContainerImage{Names:[docker.io/ollivier/clearwater-sprout@sha256:fd91f90bdfbfac30288ca296962e36537f8fa311d34711904e027098227d9f49 docker.io/ollivier/clearwater-sprout:hunter],SizeBytes:298627136,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead@sha256:8467d6c9caefa6a402eba927ee31f8179b2dc3386ac4768dd04f4bd630a1b9e9 docker.io/ollivier/clearwater-homestead:hunter],SizeBytes:295167572,},ContainerImage{Names:[docker.io/ollivier/clearwater-ralf@sha256:7054a741dbd4ea45fa4efd8138c07e21d35fbed627971559de5df2d03883e94f 
docker.io/ollivier/clearwater-ralf:hunter],SizeBytes:287441316,},ContainerImage{Names:[docker.io/ollivier/clearwater-chronos@sha256:1d5f9b5385db632e3076463fec5a4db5d4a15ef21ac5a112cb42fb0551ffa36d docker.io/ollivier/clearwater-chronos:hunter],SizeBytes:285504787,},ContainerImage{Names:[k8s.gcr.io/etcd:3.4.13-0],SizeBytes:254659261,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.21.0-alpha.0],SizeBytes:136866158,},ContainerImage{Names:[docker.io/aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 docker.io/aquasec/kube-hunter:0.3.1],SizeBytes:124684106,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:112029652,},ContainerImage{Names:[docker.io/gluster/glusterdynamic-provisioner@sha256:90067cb05a7d217651e84576935934fd9dcff8e6bcdcbaa416bbf36fcd09dbd1 docker.io/gluster/glusterdynamic-provisioner:v1.0],SizeBytes:111200078,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner@sha256:8bc20b52ce066dd4ea3d9eaac40c04ea8a77f47c33789676580cf4c7c9ea3c3d k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner:v1.0],SizeBytes:111199402,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.21.0-alpha.0],SizeBytes:95511852,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.21.0-alpha.0],SizeBytes:88147262,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:85425365,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.21.0-alpha.0],SizeBytes:66088748,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:49230179,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:a3f7549ea04c419276e8b84e90a515bbce5bc8a057be2ed974ec45492eca346e k8s.gcr.io/e2e-test-images/agnhost:2.26],SizeBytes:49216572,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:4f3eec1e602e1e0b0e5858c8f3d399d0ef536799ec1f7666e025f211001cb706 k8s.gcr.io/e2e-test-images/agnhost:2.28],SizeBytes:49210832,},ContainerImage{Names:[docker.io/bitnami/kubectl@sha256:471a2f06834db208d0e4bef5856550f911357c41b72d0baefa591b3714839067 docker.io/bitnami/kubectl:latest],SizeBytes:48898314,},ContainerImage{Names:[k8s.gcr.io/coredns:1.7.0],SizeBytes:45355487,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:41902332,},ContainerImage{Names:[docker.io/library/httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a docker.io/library/httpd:2.4.39-alpine],SizeBytes:41901429,},ContainerImage{Names:[docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 docker.io/library/httpd:2.4.38-alpine],SizeBytes:40765017,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 
k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:40765006,},ContainerImage{Names:[docker.io/ollivier/clearwater-live-test@sha256:1ec1c46f531b5466e62e2a3f1e65e97336e8ea0ca50f11a4305775d6aeac9a58 docker.io/ollivier/clearwater-live-test:hunter],SizeBytes:39060692,},ContainerImage{Names:[k8s.gcr.io/build-image/debian-iptables@sha256:abe8cef9e116f2d5ec1175c386e33841ff3386779138b425af876384b0fd7ccb k8s.gcr.io/build-image/debian-iptables:buster-v1.5.0],SizeBytes:38088315,},ContainerImage{Names:[k8s.gcr.io/build-image/debian-iptables@sha256:4c9410a4ee555dcb0e8b7bd6fc77c65ac400f7c5bd4555df8187630efaea6ea4 k8s.gcr.io/build-image/debian-iptables:buster-v1.3.0],SizeBytes:37934917,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:ff02aacd9766d597883fabafc7ad604c719a57611db1bcc1564c69a45b000a55 gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.17],SizeBytes:25311280,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:24757245,},ContainerImage{Names:[docker.io/pingcap/chaos-daemon@sha256:2138de599ca94104101fcd8f3bd04401f8c88f2bc721d0f77d55a81b85f4d92f docker.io/pingcap/chaos-daemon:v0.8.0],SizeBytes:21885344,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:20c828075d1e36f679d6a91e905b0927141eef5e15be0c9a1ca4a6a0ed9313d2 k8s.gcr.io/sig-storage/csi-provisioner:v2.1.0],SizeBytes:21205045,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:78e3393f5fd5ff6c1e5dada2478cfa456fb7164929e573cf9a87bf6532730679 k8s.gcr.io/sig-storage/csi-provisioner:v1.6.0],SizeBytes:19408504,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:2ffa647e8107cfd39e5f464e738dce014c9f5e51b108da36c3ab621048d0bbab k8s.gcr.io/sig-storage/csi-attacher:v2.2.0],SizeBytes:18451536,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:6c6a0332693a7c456378f6abd2bb40611826c1e1a733cadbdae2daab3125b71c k8s.gcr.io/sig-storage/csi-resizer:v0.5.0],SizeBytes:18412631,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:4051e85640c22f8e00c34dbd273576fc9e1e2829992656588062be9c0f69b04b k8s.gcr.io/e2e-test-images/nonroot:1.1],SizeBytes:17748448,},ContainerImage{Names:[docker.io/pingcap/chaos-mesh@sha256:b90c04665e0275602e52df37d4b6fdd4621df7d4e4c823fc63356559f67dea72 docker.io/pingcap/chaos-mesh:v0.8.0],SizeBytes:13863887,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e docker.io/coredns/coredns:latest docker.io/coredns/coredns:1.8.0],SizeBytes:12945155,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:4a6e0769130686518325b21b0c1d0688b54e7c79244d48e1b15634e98e40c6ef docker.io/coredns/coredns:1.7.1],SizeBytes:12935701,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 
docker.io/aquasec/kube-bench:0.3.1],SizeBytes:8042926,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:9622c6a6dac7499a055a382930f4de82905a3c5735c0753f7094115c9c871309 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v1.3.0],SizeBytes:7717137,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 22 00:34:38.413: INFO: Logging kubelet events for node latest-worker Mar 22 00:34:38.418: INFO: Logging pods the kubelet thinks is on node latest-worker Mar 22 00:34:38.426: INFO: startup-68a57213-12f5-4cbd-92c9-c29739439f99 started at 2021-03-22 00:31:21 +0000 UTC (0+1 container statuses recorded) Mar 22 00:34:38.426: INFO: Container busybox ready: false, restart count 0 Mar 22 00:34:38.426: INFO: pod2 started at 2021-03-22 00:33:30 +0000 UTC (0+1 container statuses recorded) Mar 22 00:34:38.426: INFO: Container agnhost ready: false, restart count 0 Mar 22 00:34:38.426: INFO: kube-proxy-5wvjm started at 2021-02-19 10:12:05 +0000 UTC (0+1 container statuses recorded) Mar 22 00:34:38.426: INFO: Container kube-proxy ready: true, restart count 0 Mar 22 00:34:38.426: INFO: e2e-host-exec started at 2021-03-22 00:33:54 +0000 UTC (0+1 container statuses recorded) Mar 22 00:34:38.426: INFO: Container e2e-host-exec ready: false, restart count 0 Mar 22 00:34:38.426: INFO: pod1 started at 2021-03-22 00:33:18 +0000 UTC (0+1 container statuses recorded) Mar 22 00:34:38.426: INFO: Container agnhost ready: false, restart count 0 Mar 22 00:34:38.426: INFO: kindnet-l4mzm started at 2021-03-22 00:02:51 +0000 UTC (0+1 container statuses recorded) Mar 22 00:34:38.426: INFO: Container kindnet-cni ready: true, restart count 0 Mar 22 00:34:38.426: INFO: chaos-daemon-vb9xf started at 2021-03-22 00:02:51 +0000 UTC (0+1 container statuses recorded) Mar 22 00:34:38.426: INFO: Container chaos-daemon ready: true, restart count 0 Mar 22 00:34:38.426: INFO: chaos-controller-manager-69c479c674-rdmrr started at 2021-03-22 00:02:52 +0000 UTC (0+1 container statuses recorded) Mar 22 00:34:38.426: INFO: Container chaos-mesh ready: true, restart count 0 Mar 22 00:34:38.426: INFO: pod3 started at 2021-03-22 00:33:42 +0000 UTC (0+1 container statuses recorded) Mar 22 00:34:38.426: INFO: Container agnhost ready: false, restart count 0 W0322 00:34:38.432550 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. 
Mar 22 00:34:38.687: INFO: Latency metrics for node latest-worker Mar 22 00:34:38.687: INFO: Logging node info for node latest-worker2 Mar 22 00:34:38.690: INFO: Node Info: &Node{ObjectMeta:{latest-worker2 7d2a1377-0c6f-45fb-899e-6c307ecb1803 6998202 0 2021-02-19 10:12:05 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux io.kubernetes.storage.mock/node:some-mock-node kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-worker2 kubernetes.io/os:linux] map[csi.volume.kubernetes.io/nodeid:{"csi-mock-csi-mock-volumes-1005":"csi-mock-csi-mock-volumes-1005","csi-mock-csi-mock-volumes-1125":"csi-mock-csi-mock-volumes-1125","csi-mock-csi-mock-volumes-1158":"csi-mock-csi-mock-volumes-1158","csi-mock-csi-mock-volumes-1167":"csi-mock-csi-mock-volumes-1167","csi-mock-csi-mock-volumes-117":"csi-mock-csi-mock-volumes-117","csi-mock-csi-mock-volumes-123":"csi-mock-csi-mock-volumes-123","csi-mock-csi-mock-volumes-1248":"csi-mock-csi-mock-volumes-1248","csi-mock-csi-mock-volumes-132":"csi-mock-csi-mock-volumes-132","csi-mock-csi-mock-volumes-1462":"csi-mock-csi-mock-volumes-1462","csi-mock-csi-mock-volumes-1508":"csi-mock-csi-mock-volumes-1508","csi-mock-csi-mock-volumes-1620":"csi-mock-csi-mock-volumes-1620","csi-mock-csi-mock-volumes-1694":"csi-mock-csi-mock-volumes-1694","csi-mock-csi-mock-volumes-1823":"csi-mock-csi-mock-volumes-1823","csi-mock-csi-mock-volumes-1829":"csi-mock-csi-mock-volumes-1829","csi-mock-csi-mock-volumes-1852":"csi-mock-csi-mock-volumes-1852","csi-mock-csi-mock-volumes-1856":"csi-mock-csi-mock-volumes-1856","csi-mock-csi-mock-volumes-194":"csi-mock-csi-mock-volumes-194","csi-mock-csi-mock-volumes-1943":"csi-mock-csi-mock-volumes-1943","csi-mock-csi-mock-volumes-2103":"csi-mock-csi-mock-volumes-2103","csi-mock-csi-mock-volumes-2109":"csi-mock-csi-mock-volumes-2109","csi-mock-csi-mock-volumes-2134":"csi-mock-csi-mock-volumes-2134","csi-mock-csi-mock-volumes-2212":"csi-mock-csi-mock-volumes-2212","csi-mock-csi-mock-volumes-2302":"csi-mock-csi-mock-volumes-2302","csi-mock-csi-mock-volumes-2308":"csi-mock-csi-mock-volumes-2308","csi-mock-csi-mock-volumes-2407":"csi-mock-csi-mock-volumes-2407","csi-mock-csi-mock-volumes-2458":"csi-mock-csi-mock-volumes-2458","csi-mock-csi-mock-volumes-2474":"csi-mock-csi-mock-volumes-2474","csi-mock-csi-mock-volumes-254":"csi-mock-csi-mock-volumes-254","csi-mock-csi-mock-volumes-2575":"csi-mock-csi-mock-volumes-2575","csi-mock-csi-mock-volumes-2622":"csi-mock-csi-mock-volumes-2622","csi-mock-csi-mock-volumes-2651":"csi-mock-csi-mock-volumes-2651","csi-mock-csi-mock-volumes-2668":"csi-mock-csi-mock-volumes-2668","csi-mock-csi-mock-volumes-2781":"csi-mock-csi-mock-volumes-2781","csi-mock-csi-mock-volumes-2791":"csi-mock-csi-mock-volumes-2791","csi-mock-csi-mock-volumes-2823":"csi-mock-csi-mock-volumes-2823","csi-mock-csi-mock-volumes-2847":"csi-mock-csi-mock-volumes-2847","csi-mock-csi-mock-volumes-295":"csi-mock-csi-mock-volumes-295","csi-mock-csi-mock-volumes-3129":"csi-mock-csi-mock-volumes-3129","csi-mock-csi-mock-volumes-3190":"csi-mock-csi-mock-volumes-3190","csi-mock-csi-mock-volumes-3428":"csi-mock-csi-mock-volumes-3428","csi-mock-csi-mock-volumes-3490":"csi-mock-csi-mock-volumes-3490","csi-mock-csi-mock-volumes-3491":"csi-mock-csi-mock-volumes-3491","csi-mock-csi-mock-volumes-3532":"csi-mock-csi-mock-volumes-3532","csi-mock-csi-mock-volumes-3609":"csi-mock-csi-mock-volumes-3609","csi-mock-csi-mock-volumes-368":"csi-mock-csi-mock-volumes-368","csi-mock-csi-mock-volumes-3698":"csi-mock-csi-mock-volumes-3698","csi-mock-csi-moc
k-volumes-3714":"csi-mock-csi-mock-volumes-3714","csi-mock-csi-mock-volumes-3720":"csi-mock-csi-mock-volumes-3720","csi-mock-csi-mock-volumes-3722":"csi-mock-csi-mock-volumes-3722","csi-mock-csi-mock-volumes-3723":"csi-mock-csi-mock-volumes-3723","csi-mock-csi-mock-volumes-3783":"csi-mock-csi-mock-volumes-3783","csi-mock-csi-mock-volumes-3887":"csi-mock-csi-mock-volumes-3887","csi-mock-csi-mock-volumes-3973":"csi-mock-csi-mock-volumes-3973","csi-mock-csi-mock-volumes-4074":"csi-mock-csi-mock-volumes-4074","csi-mock-csi-mock-volumes-4164":"csi-mock-csi-mock-volumes-4164","csi-mock-csi-mock-volumes-4181":"csi-mock-csi-mock-volumes-4181","csi-mock-csi-mock-volumes-4442":"csi-mock-csi-mock-volumes-4442","csi-mock-csi-mock-volumes-4483":"csi-mock-csi-mock-volumes-4483","csi-mock-csi-mock-volumes-4549":"csi-mock-csi-mock-volumes-4549","csi-mock-csi-mock-volumes-4707":"csi-mock-csi-mock-volumes-4707","csi-mock-csi-mock-volumes-4906":"csi-mock-csi-mock-volumes-4906","csi-mock-csi-mock-volumes-4977":"csi-mock-csi-mock-volumes-4977","csi-mock-csi-mock-volumes-5116":"csi-mock-csi-mock-volumes-5116","csi-mock-csi-mock-volumes-5166":"csi-mock-csi-mock-volumes-5166","csi-mock-csi-mock-volumes-5233":"csi-mock-csi-mock-volumes-5233","csi-mock-csi-mock-volumes-5344":"csi-mock-csi-mock-volumes-5344","csi-mock-csi-mock-volumes-5406":"csi-mock-csi-mock-volumes-5406","csi-mock-csi-mock-volumes-5433":"csi-mock-csi-mock-volumes-5433","csi-mock-csi-mock-volumes-5483":"csi-mock-csi-mock-volumes-5483","csi-mock-csi-mock-volumes-5486":"csi-mock-csi-mock-volumes-5486","csi-mock-csi-mock-volumes-5520":"csi-mock-csi-mock-volumes-5520","csi-mock-csi-mock-volumes-5540":"csi-mock-csi-mock-volumes-5540","csi-mock-csi-mock-volumes-5592":"csi-mock-csi-mock-volumes-5592","csi-mock-csi-mock-volumes-5741":"csi-mock-csi-mock-volumes-5741","csi-mock-csi-mock-volumes-5753":"csi-mock-csi-mock-volumes-5753","csi-mock-csi-mock-volumes-5790":"csi-mock-csi-mock-volumes-5790","csi-mock-csi-mock-volumes-5820":"csi-mock-csi-mock-volumes-5820","csi-mock-csi-mock-volumes-5830":"csi-mock-csi-mock-volumes-5830","csi-mock-csi-mock-volumes-5880":"csi-mock-csi-mock-volumes-5880","csi-mock-csi-mock-volumes-5886":"csi-mock-csi-mock-volumes-5886","csi-mock-csi-mock-volumes-5899":"csi-mock-csi-mock-volumes-5899","csi-mock-csi-mock-volumes-5925":"csi-mock-csi-mock-volumes-5925","csi-mock-csi-mock-volumes-5928":"csi-mock-csi-mock-volumes-5928","csi-mock-csi-mock-volumes-6098":"csi-mock-csi-mock-volumes-6098","csi-mock-csi-mock-volumes-6154":"csi-mock-csi-mock-volumes-6154","csi-mock-csi-mock-volumes-6193":"csi-mock-csi-mock-volumes-6193","csi-mock-csi-mock-volumes-6237":"csi-mock-csi-mock-volumes-6237","csi-mock-csi-mock-volumes-6393":"csi-mock-csi-mock-volumes-6393","csi-mock-csi-mock-volumes-6394":"csi-mock-csi-mock-volumes-6394","csi-mock-csi-mock-volumes-6468":"csi-mock-csi-mock-volumes-6468","csi-mock-csi-mock-volumes-6508":"csi-mock-csi-mock-volumes-6508","csi-mock-csi-mock-volumes-6516":"csi-mock-csi-mock-volumes-6516","csi-mock-csi-mock-volumes-6520":"csi-mock-csi-mock-volumes-6520","csi-mock-csi-mock-volumes-6574":"csi-mock-csi-mock-volumes-6574","csi-mock-csi-mock-volumes-6663":"csi-mock-csi-mock-volumes-6663","csi-mock-csi-mock-volumes-6715":"csi-mock-csi-mock-volumes-6715","csi-mock-csi-mock-volumes-6754":"csi-mock-csi-mock-volumes-6754","csi-mock-csi-mock-volumes-6804":"csi-mock-csi-mock-volumes-6804","csi-mock-csi-mock-volumes-6918":"csi-mock-csi-mock-volumes-6918","csi-mock-csi-mock-volumes-6925":"csi-mock-csi-mock-volumes-6925","csi-moc
k-csi-mock-volumes-7092":"csi-mock-csi-mock-volumes-7092","csi-mock-csi-mock-volumes-7139":"csi-mock-csi-mock-volumes-7139","csi-mock-csi-mock-volumes-7270":"csi-mock-csi-mock-volumes-7270","csi-mock-csi-mock-volumes-7273":"csi-mock-csi-mock-volumes-7273","csi-mock-csi-mock-volumes-7442":"csi-mock-csi-mock-volumes-7442","csi-mock-csi-mock-volumes-7448":"csi-mock-csi-mock-volumes-7448","csi-mock-csi-mock-volumes-7543":"csi-mock-csi-mock-volumes-7543","csi-mock-csi-mock-volumes-7597":"csi-mock-csi-mock-volumes-7597","csi-mock-csi-mock-volumes-7608":"csi-mock-csi-mock-volumes-7608","csi-mock-csi-mock-volumes-7642":"csi-mock-csi-mock-volumes-7642","csi-mock-csi-mock-volumes-7659":"csi-mock-csi-mock-volumes-7659","csi-mock-csi-mock-volumes-7725":"csi-mock-csi-mock-volumes-7725","csi-mock-csi-mock-volumes-7760":"csi-mock-csi-mock-volumes-7760","csi-mock-csi-mock-volumes-778":"csi-mock-csi-mock-volumes-778","csi-mock-csi-mock-volumes-7811":"csi-mock-csi-mock-volumes-7811","csi-mock-csi-mock-volumes-7819":"csi-mock-csi-mock-volumes-7819","csi-mock-csi-mock-volumes-791":"csi-mock-csi-mock-volumes-791","csi-mock-csi-mock-volumes-7929":"csi-mock-csi-mock-volumes-7929","csi-mock-csi-mock-volumes-7930":"csi-mock-csi-mock-volumes-7930","csi-mock-csi-mock-volumes-7933":"csi-mock-csi-mock-volumes-7933","csi-mock-csi-mock-volumes-7950":"csi-mock-csi-mock-volumes-7950","csi-mock-csi-mock-volumes-8005":"csi-mock-csi-mock-volumes-8005","csi-mock-csi-mock-volumes-8070":"csi-mock-csi-mock-volumes-8070","csi-mock-csi-mock-volumes-8123":"csi-mock-csi-mock-volumes-8123","csi-mock-csi-mock-volumes-8132":"csi-mock-csi-mock-volumes-8132","csi-mock-csi-mock-volumes-8134":"csi-mock-csi-mock-volumes-8134","csi-mock-csi-mock-volumes-8381":"csi-mock-csi-mock-volumes-8381","csi-mock-csi-mock-volumes-8391":"csi-mock-csi-mock-volumes-8391","csi-mock-csi-mock-volumes-8409":"csi-mock-csi-mock-volumes-8409","csi-mock-csi-mock-volumes-8420":"csi-mock-csi-mock-volumes-8420","csi-mock-csi-mock-volumes-8467":"csi-mock-csi-mock-volumes-8467","csi-mock-csi-mock-volumes-8533":"csi-mock-csi-mock-volumes-8533","csi-mock-csi-mock-volumes-8581":"csi-mock-csi-mock-volumes-8581","csi-mock-csi-mock-volumes-8619":"csi-mock-csi-mock-volumes-8619","csi-mock-csi-mock-volumes-8708":"csi-mock-csi-mock-volumes-8708","csi-mock-csi-mock-volumes-8766":"csi-mock-csi-mock-volumes-8766","csi-mock-csi-mock-volumes-8789":"csi-mock-csi-mock-volumes-8789","csi-mock-csi-mock-volumes-8800":"csi-mock-csi-mock-volumes-8800","csi-mock-csi-mock-volumes-8819":"csi-mock-csi-mock-volumes-8819","csi-mock-csi-mock-volumes-8830":"csi-mock-csi-mock-volumes-8830","csi-mock-csi-mock-volumes-9034":"csi-mock-csi-mock-volumes-9034","csi-mock-csi-mock-volumes-9051":"csi-mock-csi-mock-volumes-9051","csi-mock-csi-mock-volumes-9052":"csi-mock-csi-mock-volumes-9052","csi-mock-csi-mock-volumes-9211":"csi-mock-csi-mock-volumes-9211","csi-mock-csi-mock-volumes-9234":"csi-mock-csi-mock-volumes-9234","csi-mock-csi-mock-volumes-9238":"csi-mock-csi-mock-volumes-9238","csi-mock-csi-mock-volumes-9316":"csi-mock-csi-mock-volumes-9316","csi-mock-csi-mock-volumes-9344":"csi-mock-csi-mock-volumes-9344","csi-mock-csi-mock-volumes-9386":"csi-mock-csi-mock-volumes-9386","csi-mock-csi-mock-volumes-941":"csi-mock-csi-mock-volumes-941","csi-mock-csi-mock-volumes-9416":"csi-mock-csi-mock-volumes-9416","csi-mock-csi-mock-volumes-9426":"csi-mock-csi-mock-volumes-9426","csi-mock-csi-mock-volumes-9429":"csi-mock-csi-mock-volumes-9429","csi-mock-csi-mock-volumes-9438":"csi-mock-csi-mock-volumes-9438","csi-
mock-csi-mock-volumes-9439":"csi-mock-csi-mock-volumes-9439","csi-mock-csi-mock-volumes-9665":"csi-mock-csi-mock-volumes-9665","csi-mock-csi-mock-volumes-9772":"csi-mock-csi-mock-volumes-9772","csi-mock-csi-mock-volumes-9793":"csi-mock-csi-mock-volumes-9793","csi-mock-csi-mock-volumes-9921":"csi-mock-csi-mock-volumes-9921","csi-mock-csi-mock-volumes-9953":"csi-mock-csi-mock-volumes-9953"} kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-02-19 10:12:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {kube-controller-manager Update v1 2021-03-22 00:31:03 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}} {kubelet Update v1 2021-03-22 00:32:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:csi.volume.kubernetes.io/nodeid":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:io.kubernetes.storage.mock/node":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:kind://docker/latest/latest-worker2,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-03-22 
00:32:28 +0000 UTC,LastTransitionTime:2021-02-19 10:12:05 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-03-22 00:32:28 +0000 UTC,LastTransitionTime:2021-02-19 10:12:05 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-03-22 00:32:28 +0000 UTC,LastTransitionTime:2021-02-19 10:12:05 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-03-22 00:32:28 +0000 UTC,LastTransitionTime:2021-02-19 10:12:35 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.13,},NodeAddress{Type:Hostname,Address:latest-worker2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:c99819c175ab4bf18f91789643e4cec7,SystemUUID:67d3f1bb-f61c-4599-98ac-5879bee4ddde,BootID:b267d78b-f69b-4338-80e8-3f4944338e5d,KernelVersion:4.15.0-118-generic,OSImage:Ubuntu 20.04 LTS,ContainerRuntimeVersion:containerd://1.3.3-14-g449e9269,KubeletVersion:v1.21.0-alpha.0,KubeProxyVersion:v1.21.0-alpha.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/ollivier/clearwater-cassandra@sha256:7460a5e29c91e16bd36534b5284b27832e1c6e0f7c22a4bb79eda79942d250e1 docker.io/ollivier/clearwater-cassandra:hunter],SizeBytes:386500834,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead-prov@sha256:ef3803b4336ea217f9a467dac7f5c4bc3d2cd0f3cbbcc6c101419506fc3a6fa4 docker.io/ollivier/clearwater-homestead-prov:hunter],SizeBytes:360721934,},ContainerImage{Names:[docker.io/ollivier/clearwater-ellis@sha256:abcaf4a9ec74aa746f3b3949f05ee359349fce2b0b3b2deedf67164719cff5dc docker.io/ollivier/clearwater-ellis:hunter],SizeBytes:351519591,},ContainerImage{Names:[docker.io/ollivier/clearwater-homer@sha256:f8738d756137990120f6aafca7e0bc57186ab591a4b51508ba9ddf30688f4de1 docker.io/ollivier/clearwater-homer:hunter],SizeBytes:344304298,},ContainerImage{Names:[docker.io/ollivier/clearwater-astaire@sha256:5bc730435563dc3e31f3a3b7bb58b5899c453929b624052622b3a9a1c2fd04d8 docker.io/ollivier/clearwater-astaire:hunter],SizeBytes:327310970,},ContainerImage{Names:[docker.io/ollivier/clearwater-bono@sha256:0da41ddbf72c158f8e617015450587634cdf6856696005601b2108df92a27254 docker.io/ollivier/clearwater-bono:hunter],SizeBytes:303708624,},ContainerImage{Names:[docker.io/ollivier/clearwater-sprout@sha256:fd91f90bdfbfac30288ca296962e36537f8fa311d34711904e027098227d9f49 docker.io/ollivier/clearwater-sprout:hunter],SizeBytes:298627136,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead@sha256:8467d6c9caefa6a402eba927ee31f8179b2dc3386ac4768dd04f4bd630a1b9e9 docker.io/ollivier/clearwater-homestead:hunter],SizeBytes:295167572,},ContainerImage{Names:[docker.io/ollivier/clearwater-ralf@sha256:7054a741dbd4ea45fa4efd8138c07e21d35fbed627971559de5df2d03883e94f docker.io/ollivier/clearwater-ralf:hunter],SizeBytes:287441316,},ContainerImage{Names:[docker.io/ollivier/clearwater-chronos@sha256:1d5f9b5385db632e3076463fec5a4db5d4a15ef21ac5a112cb42fb0551ffa36d 
docker.io/ollivier/clearwater-chronos:hunter],SizeBytes:285504787,},ContainerImage{Names:[k8s.gcr.io/etcd:3.4.13-0],SizeBytes:254659261,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.21.0-alpha.0],SizeBytes:136866158,},ContainerImage{Names:[docker.io/aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 docker.io/aquasec/kube-hunter:0.3.1],SizeBytes:124684106,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:112029652,},ContainerImage{Names:[docker.io/gluster/glusterdynamic-provisioner@sha256:90067cb05a7d217651e84576935934fd9dcff8e6bcdcbaa416bbf36fcd09dbd1 docker.io/gluster/glusterdynamic-provisioner:v1.0],SizeBytes:111200078,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner@sha256:8bc20b52ce066dd4ea3d9eaac40c04ea8a77f47c33789676580cf4c7c9ea3c3d k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner:v1.0],SizeBytes:111199402,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.21.0-alpha.0],SizeBytes:95511852,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.21.0-alpha.0],SizeBytes:88147262,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:85425365,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.21.0-alpha.0],SizeBytes:66088748,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:49230179,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:a3f7549ea04c419276e8b84e90a515bbce5bc8a057be2ed974ec45492eca346e k8s.gcr.io/e2e-test-images/agnhost:2.26],SizeBytes:49216572,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:4f3eec1e602e1e0b0e5858c8f3d399d0ef536799ec1f7666e025f211001cb706 k8s.gcr.io/e2e-test-images/agnhost:2.28],SizeBytes:49210832,},ContainerImage{Names:[k8s.gcr.io/coredns:1.7.0],SizeBytes:45355487,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:41902332,},ContainerImage{Names:[docker.io/library/httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a docker.io/library/httpd:2.4.39-alpine],SizeBytes:41901429,},ContainerImage{Names:[docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 docker.io/library/httpd:2.4.38-alpine],SizeBytes:40765017,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:40765006,},ContainerImage{Names:[docker.io/ollivier/clearwater-live-test@sha256:1ec1c46f531b5466e62e2a3f1e65e97336e8ea0ca50f11a4305775d6aeac9a58 docker.io/ollivier/clearwater-live-test:hunter],SizeBytes:39060692,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:ff02aacd9766d597883fabafc7ad604c719a57611db1bcc1564c69a45b000a55 
gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.17],SizeBytes:25311280,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:24757245,},ContainerImage{Names:[docker.io/pingcap/chaos-daemon@sha256:2138de599ca94104101fcd8f3bd04401f8c88f2bc721d0f77d55a81b85f4d92f docker.io/pingcap/chaos-daemon:v0.8.0],SizeBytes:21885344,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:20c828075d1e36f679d6a91e905b0927141eef5e15be0c9a1ca4a6a0ed9313d2 k8s.gcr.io/sig-storage/csi-provisioner:v2.1.0],SizeBytes:21205045,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:78e3393f5fd5ff6c1e5dada2478cfa456fb7164929e573cf9a87bf6532730679 k8s.gcr.io/sig-storage/csi-provisioner:v1.6.0],SizeBytes:19408504,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/regression-issue-74839@sha256:b4f1d8d61bdad84bd50442d161d5460e4019d53e989b64220fdbc62fc87d76bf k8s.gcr.io/e2e-test-images/regression-issue-74839:1.2],SizeBytes:18651485,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:2ffa647e8107cfd39e5f464e738dce014c9f5e51b108da36c3ab621048d0bbab k8s.gcr.io/sig-storage/csi-attacher:v2.2.0],SizeBytes:18451536,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:6c6a0332693a7c456378f6abd2bb40611826c1e1a733cadbdae2daab3125b71c k8s.gcr.io/sig-storage/csi-resizer:v0.5.0],SizeBytes:18412631,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:4051e85640c22f8e00c34dbd273576fc9e1e2829992656588062be9c0f69b04b k8s.gcr.io/e2e-test-images/nonroot:1.1],SizeBytes:17748448,},ContainerImage{Names:[docker.io/pingcap/chaos-mesh@sha256:b90c04665e0275602e52df37d4b6fdd4621df7d4e4c823fc63356559f67dea72 docker.io/pingcap/chaos-mesh:v0.8.0],SizeBytes:13863887,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e docker.io/coredns/coredns:1.8.0],SizeBytes:12945155,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:4a6e0769130686518325b21b0c1d0688b54e7c79244d48e1b15634e98e40c6ef docker.io/coredns/coredns:1.7.1],SizeBytes:12935701,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:642ff9910da6ea9a8624b0234eef52af9ca75ecbec474c5507cb096bdfbae4e5 docker.io/coredns/coredns:latest],SizeBytes:12893350,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 docker.io/aquasec/kube-bench:0.3.1],SizeBytes:8042926,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:9622c6a6dac7499a055a382930f4de82905a3c5735c0753f7094115c9c871309 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v1.3.0],SizeBytes:7717137,},ContainerImage{Names:[quay.io/coreos/etcd:v2.2.5],SizeBytes:7670543,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 22 00:34:38.691: INFO: Logging kubelet events for node latest-worker2 Mar 22 00:34:38.696: INFO: Logging pods the kubelet thinks is on node latest-worker2 Mar 22 00:34:38.718: INFO: chaos-daemon-4zjcg started at 2021-03-22 
00:02:52 +0000 UTC (0+1 container statuses recorded) Mar 22 00:34:38.718: INFO: Container chaos-daemon ready: true, restart count 0 Mar 22 00:34:38.718: INFO: csi-mockplugin-0 started at 2021-03-22 00:34:28 +0000 UTC (0+3 container statuses recorded) Mar 22 00:34:38.718: INFO: Container csi-provisioner ready: true, restart count 0 Mar 22 00:34:38.718: INFO: Container driver-registrar ready: true, restart count 0 Mar 22 00:34:38.718: INFO: Container mock ready: true, restart count 0 Mar 22 00:34:38.718: INFO: csi-mockplugin-attacher-0 started at 2021-03-22 00:34:28 +0000 UTC (0+1 container statuses recorded) Mar 22 00:34:38.718: INFO: Container csi-attacher ready: true, restart count 0 Mar 22 00:34:38.718: INFO: kube-proxy-7q92q started at 2021-02-19 10:12:05 +0000 UTC (0+1 container statuses recorded) Mar 22 00:34:38.718: INFO: Container kube-proxy ready: true, restart count 0 Mar 22 00:34:38.718: INFO: kindnet-7qb7q started at 2021-03-22 00:02:52 +0000 UTC (0+1 container statuses recorded) Mar 22 00:34:38.718: INFO: Container kindnet-cni ready: true, restart count 0 W0322 00:34:38.723344 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Mar 22 00:34:38.992: INFO: Latency metrics for node latest-worker2 Mar 22 00:34:38.992: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "endpointslice-4065" for this suite. • Failure [0.857 seconds] [sig-network] EndpointSlice /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should have Endpoints and EndpointSlices pointing to API Server [Conformance] [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Mar 22 00:34:38.239: error creating EndpointSlice resource Unexpected error: <*errors.StatusError | 0xc00107f360>: { ErrStatus: { TypeMeta: {Kind: "", APIVersion: ""}, ListMeta: { SelfLink: "", ResourceVersion: "", Continue: "", RemainingItemCount: nil, }, Status: "Failure", Message: "the server could not find the requested resource", Reason: "NotFound", Details: {Name: "", Group: "", Kind: "", UID: "", Causes: nil, RetryAfterSeconds: 0}, Code: 404, }, } the server could not find the requested resource occurred /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/endpointslice.go:70 ------------------------------ {"msg":"FAILED [sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","total":330,"completed":132,"skipped":2251,"failed":7,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]"]} SSSSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should mount projected service account token [Conformance] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 00:34:39.029: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should mount projected service account token [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test service account token: Mar 22 00:34:39.157: INFO: Waiting up to 5m0s for pod "test-pod-e6080857-85a1-41d5-8299-95639acc465f" in namespace "svcaccounts-1162" to be "Succeeded or Failed" Mar 22 00:34:39.178: INFO: Pod "test-pod-e6080857-85a1-41d5-8299-95639acc465f": Phase="Pending", Reason="", readiness=false. Elapsed: 20.778658ms Mar 22 00:34:41.324: INFO: Pod "test-pod-e6080857-85a1-41d5-8299-95639acc465f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.166923877s Mar 22 00:34:43.328: INFO: Pod "test-pod-e6080857-85a1-41d5-8299-95639acc465f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.171033247s Mar 22 00:34:45.333: INFO: Pod "test-pod-e6080857-85a1-41d5-8299-95639acc465f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.176075829s STEP: Saw pod success Mar 22 00:34:45.333: INFO: Pod "test-pod-e6080857-85a1-41d5-8299-95639acc465f" satisfied condition "Succeeded or Failed" Mar 22 00:34:45.337: INFO: Trying to get logs from node latest-worker2 pod test-pod-e6080857-85a1-41d5-8299-95639acc465f container agnhost-container: STEP: delete the pod Mar 22 00:34:45.373: INFO: Waiting for pod test-pod-e6080857-85a1-41d5-8299-95639acc465f to disappear Mar 22 00:34:45.423: INFO: Pod test-pod-e6080857-85a1-41d5-8299-95639acc465f no longer exists [AfterEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 00:34:45.423: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-1162" for this suite. 
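For context on what this test exercises: it creates a pod whose service-account token is delivered through a projected service-account-token volume rather than the legacy Secret mount. Below is a minimal client-go sketch of such a pod; the pod name, image, mount path, and token lifetime are illustrative (the agnhost image does appear in this node's image list above), and the types assume roughly v0.21-era k8s.io/api and k8s.io/client-go. This is a sketch of the mechanism, not the test's actual code.

```go
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	expiration := int64(3600) // requested token lifetime in seconds (illustrative)
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "projected-token-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "main",
				Image:   "k8s.gcr.io/e2e-test-images/agnhost:2.28",
				Command: []string{"cat", "/var/run/secrets/tokens/sa-token"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "sa-token",
					MountPath: "/var/run/secrets/tokens",
					ReadOnly:  true,
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "sa-token",
				VolumeSource: corev1.VolumeSource{
					// The kubelet mints and rotates the token for this volume.
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							ServiceAccountToken: &corev1.ServiceAccountTokenProjection{
								Path:              "sa-token",
								ExpirationSeconds: &expiration,
							},
						}},
					},
				},
			}},
		},
	}

	created, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("created pod", created.Name)
}
```

The pod reaching "Succeeded" after printing the mounted token is what the "Succeeded or Failed" wait above is polling for.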
• [SLOW TEST:6.404 seconds] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 should mount projected service account token [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-auth] ServiceAccounts should mount projected service account token [Conformance]","total":330,"completed":133,"skipped":2261,"failed":7,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]"]} SSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 00:34:45.433: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:135 [It] should run and stop simple daemon [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. 
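A rough sketch of the kind of "simple daemon" DaemonSet this step creates, assuming v0.21-era client-go; the helper name, labels, and image are illustrative (the httpd image is present in the node image list above). Note that without a toleration for the node-role.kubernetes.io/master:NoSchedule taint, daemon pods land only on worker nodes, which is why the polling below repeatedly skips latest-control-plane.

```go
package main

import (
	"context"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// createSimpleDaemonSet creates a one-container DaemonSet. The DaemonSet
// controller then tries to place one pod on every node whose taints the
// pod template tolerates.
func createSimpleDaemonSet(cs kubernetes.Interface, ns string) error {
	labels := map[string]string{"daemonset-name": "daemon-set"}
	ds := &appsv1.DaemonSet{
		ObjectMeta: metav1.ObjectMeta{Name: "daemon-set"},
		Spec: appsv1.DaemonSetSpec{
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "app",
						Image: "k8s.gcr.io/e2e-test-images/httpd:2.4.38-1",
					}},
					// To also cover tainted control-plane nodes, a toleration
					// like this would be needed:
					// Tolerations: []corev1.Toleration{{
					//     Key:      "node-role.kubernetes.io/master",
					//     Operator: corev1.TolerationOpExists,
					//     Effect:   corev1.TaintEffectNoSchedule,
					// }},
				},
			},
		},
	}
	_, err := cs.AppsV1().DaemonSets(ns).Create(context.TODO(), ds, metav1.CreateOptions{})
	return err
}
```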
Mar 22 00:34:45.646: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 22 00:34:45.649: INFO: Number of nodes with available pods: 0 Mar 22 00:34:45.649: INFO: Node latest-worker is running more than one daemon pod Mar 22 00:34:46.654: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 22 00:34:46.658: INFO: Number of nodes with available pods: 0 Mar 22 00:34:46.658: INFO: Node latest-worker is running more than one daemon pod Mar 22 00:34:48.450: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 22 00:34:48.453: INFO: Number of nodes with available pods: 0 Mar 22 00:34:48.453: INFO: Node latest-worker is running more than one daemon pod Mar 22 00:34:48.809: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 22 00:34:48.849: INFO: Number of nodes with available pods: 0 Mar 22 00:34:48.849: INFO: Node latest-worker is running more than one daemon pod Mar 22 00:34:49.654: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 22 00:34:49.700: INFO: Number of nodes with available pods: 0 Mar 22 00:34:49.700: INFO: Node latest-worker is running more than one daemon pod Mar 22 00:34:50.707: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 22 00:34:50.723: INFO: Number of nodes with available pods: 0 Mar 22 00:34:50.723: INFO: Node latest-worker is running more than one daemon pod Mar 22 00:34:51.655: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 22 00:34:51.659: INFO: Number of nodes with available pods: 2 Mar 22 00:34:51.659: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Stop a daemon pod, check that the daemon pod is revived. 
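The revival being checked in this step is ordinary DaemonSet reconciliation: the controller's desired state is one pod per eligible node, so deleting a daemon pod triggers a replacement. A hedged sketch of that delete-and-wait flow follows; the helper name and label selector are illustrative and match the sketch above, not the test's actual code.

```go
package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// deleteOneDaemonPod deletes a single pod of the DaemonSet and waits for
// the controller to recreate it, restoring the original pod count.
func deleteOneDaemonPod(cs kubernetes.Interface, ns string) error {
	sel := "daemonset-name=daemon-set" // label from the pod template above
	pods, err := cs.CoreV1().Pods(ns).List(context.TODO(), metav1.ListOptions{LabelSelector: sel})
	if err != nil {
		return err
	}
	if len(pods.Items) == 0 {
		return fmt.Errorf("no daemon pods found for selector %q", sel)
	}
	victim := pods.Items[0].Name
	if err := cs.CoreV1().Pods(ns).Delete(context.TODO(), victim, metav1.DeleteOptions{}); err != nil {
		return err
	}
	// Poll until the original count of running, non-terminating pods is back.
	return wait.PollImmediate(time.Second, 2*time.Minute, func() (bool, error) {
		cur, err := cs.CoreV1().Pods(ns).List(context.TODO(), metav1.ListOptions{LabelSelector: sel})
		if err != nil {
			return false, err
		}
		running := 0
		for _, p := range cur.Items {
			if p.Status.Phase == "Running" && p.DeletionTimestamp == nil {
				running++
			}
		}
		return running == len(pods.Items), nil
	})
}
```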
Mar 22 00:34:51.738: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 22 00:34:51.743: INFO: Number of nodes with available pods: 1 Mar 22 00:34:51.743: INFO: Node latest-worker is running more than one daemon pod Mar 22 00:34:52.779: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 22 00:34:52.782: INFO: Number of nodes with available pods: 1 Mar 22 00:34:52.782: INFO: Node latest-worker is running more than one daemon pod Mar 22 00:34:53.905: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 22 00:34:53.909: INFO: Number of nodes with available pods: 1 Mar 22 00:34:53.909: INFO: Node latest-worker is running more than one daemon pod Mar 22 00:34:54.764: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 22 00:34:54.767: INFO: Number of nodes with available pods: 1 Mar 22 00:34:54.767: INFO: Node latest-worker is running more than one daemon pod Mar 22 00:34:55.750: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 22 00:34:55.753: INFO: Number of nodes with available pods: 1 Mar 22 00:34:55.753: INFO: Node latest-worker is running more than one daemon pod Mar 22 00:34:56.749: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 22 00:34:56.753: INFO: Number of nodes with available pods: 1 Mar 22 00:34:56.753: INFO: Node latest-worker is running more than one daemon pod Mar 22 00:34:57.780: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 22 00:34:57.783: INFO: Number of nodes with available pods: 1 Mar 22 00:34:57.783: INFO: Node latest-worker is running more than one daemon pod Mar 22 00:34:58.749: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 22 00:34:58.753: INFO: Number of nodes with available pods: 1 Mar 22 00:34:58.753: INFO: Node latest-worker is running more than one daemon pod Mar 22 00:34:59.747: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 22 00:34:59.751: INFO: Number of nodes with available pods: 1 Mar 22 00:34:59.751: INFO: Node latest-worker is running more than one daemon pod Mar 22 00:35:00.749: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 22 00:35:00.754: INFO: Number of nodes with available pods: 1 Mar 22 00:35:00.754: INFO: Node latest-worker is running more than one daemon pod Mar 22 00:35:01.749: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints 
[{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 22 00:35:01.752: INFO: Number of nodes with available pods: 1 Mar 22 00:35:01.752: INFO: Node latest-worker is running more than one daemon pod Mar 22 00:35:02.768: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 22 00:35:02.771: INFO: Number of nodes with available pods: 1 Mar 22 00:35:02.771: INFO: Node latest-worker is running more than one daemon pod Mar 22 00:35:03.750: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 22 00:35:03.753: INFO: Number of nodes with available pods: 1 Mar 22 00:35:03.753: INFO: Node latest-worker is running more than one daemon pod Mar 22 00:35:04.747: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 22 00:35:04.750: INFO: Number of nodes with available pods: 1 Mar 22 00:35:04.750: INFO: Node latest-worker is running more than one daemon pod Mar 22 00:35:05.751: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 22 00:35:05.757: INFO: Number of nodes with available pods: 1 Mar 22 00:35:05.757: INFO: Node latest-worker is running more than one daemon pod Mar 22 00:35:06.749: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 22 00:35:06.752: INFO: Number of nodes with available pods: 1 Mar 22 00:35:06.752: INFO: Node latest-worker is running more than one daemon pod Mar 22 00:35:07.748: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 22 00:35:07.751: INFO: Number of nodes with available pods: 1 Mar 22 00:35:07.751: INFO: Node latest-worker is running more than one daemon pod Mar 22 00:35:08.749: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 22 00:35:08.752: INFO: Number of nodes with available pods: 1 Mar 22 00:35:08.752: INFO: Node latest-worker is running more than one daemon pod Mar 22 00:35:09.750: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 22 00:35:09.754: INFO: Number of nodes with available pods: 1 Mar 22 00:35:09.754: INFO: Node latest-worker is running more than one daemon pod Mar 22 00:35:10.749: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 22 00:35:10.752: INFO: Number of nodes with available pods: 1 Mar 22 00:35:10.752: INFO: Node latest-worker is running more than one daemon pod Mar 22 00:35:11.750: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 22 00:35:11.754: 
INFO: Number of nodes with available pods: 1 Mar 22 00:35:11.754: INFO: Node latest-worker is running more than one daemon pod Mar 22 00:35:12.748: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 22 00:35:12.751: INFO: Number of nodes with available pods: 1 Mar 22 00:35:12.751: INFO: Node latest-worker is running more than one daemon pod Mar 22 00:35:13.755: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 22 00:35:13.760: INFO: Number of nodes with available pods: 1 Mar 22 00:35:13.760: INFO: Node latest-worker is running more than one daemon pod Mar 22 00:35:14.774: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 22 00:35:14.823: INFO: Number of nodes with available pods: 1 Mar 22 00:35:14.823: INFO: Node latest-worker is running more than one daemon pod Mar 22 00:35:15.792: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 22 00:35:15.795: INFO: Number of nodes with available pods: 1 Mar 22 00:35:15.795: INFO: Node latest-worker is running more than one daemon pod Mar 22 00:35:16.757: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 22 00:35:16.762: INFO: Number of nodes with available pods: 1 Mar 22 00:35:16.762: INFO: Node latest-worker is running more than one daemon pod Mar 22 00:35:17.750: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 22 00:35:17.753: INFO: Number of nodes with available pods: 1 Mar 22 00:35:17.753: INFO: Node latest-worker is running more than one daemon pod Mar 22 00:35:18.750: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 22 00:35:18.756: INFO: Number of nodes with available pods: 1 Mar 22 00:35:18.756: INFO: Node latest-worker is running more than one daemon pod Mar 22 00:35:19.750: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 22 00:35:19.754: INFO: Number of nodes with available pods: 1 Mar 22 00:35:19.754: INFO: Node latest-worker is running more than one daemon pod Mar 22 00:35:20.750: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 22 00:35:20.754: INFO: Number of nodes with available pods: 1 Mar 22 00:35:20.754: INFO: Node latest-worker is running more than one daemon pod Mar 22 00:35:21.749: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 22 00:35:21.752: INFO: Number of nodes with available pods: 1 Mar 22 00:35:21.752: INFO: Node latest-worker is running more than one daemon 
pod Mar 22 00:35:22.750: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 22 00:35:22.754: INFO: Number of nodes with available pods: 1 Mar 22 00:35:22.754: INFO: Node latest-worker is running more than one daemon pod Mar 22 00:35:23.748: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 22 00:35:23.751: INFO: Number of nodes with available pods: 1 Mar 22 00:35:23.751: INFO: Node latest-worker is running more than one daemon pod Mar 22 00:35:24.748: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 22 00:35:24.751: INFO: Number of nodes with available pods: 1 Mar 22 00:35:24.751: INFO: Node latest-worker is running more than one daemon pod Mar 22 00:35:25.755: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 22 00:35:25.758: INFO: Number of nodes with available pods: 1 Mar 22 00:35:25.758: INFO: Node latest-worker is running more than one daemon pod Mar 22 00:35:26.749: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 22 00:35:26.753: INFO: Number of nodes with available pods: 1 Mar 22 00:35:26.753: INFO: Node latest-worker is running more than one daemon pod Mar 22 00:35:27.751: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 22 00:35:27.754: INFO: Number of nodes with available pods: 1 Mar 22 00:35:27.754: INFO: Node latest-worker is running more than one daemon pod Mar 22 00:35:28.748: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 22 00:35:28.839: INFO: Number of nodes with available pods: 1 Mar 22 00:35:28.839: INFO: Node latest-worker is running more than one daemon pod Mar 22 00:35:29.749: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 22 00:35:29.753: INFO: Number of nodes with available pods: 1 Mar 22 00:35:29.753: INFO: Node latest-worker is running more than one daemon pod Mar 22 00:35:30.749: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 22 00:35:30.753: INFO: Number of nodes with available pods: 1 Mar 22 00:35:30.753: INFO: Node latest-worker is running more than one daemon pod Mar 22 00:35:31.750: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 22 00:35:31.754: INFO: Number of nodes with available pods: 1 Mar 22 00:35:31.754: INFO: Node latest-worker is running more than one daemon pod Mar 22 00:35:32.750: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints 
[{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 22 00:35:32.754: INFO: Number of nodes with available pods: 1 Mar 22 00:35:32.754: INFO: Node latest-worker is running more than one daemon pod Mar 22 00:35:33.779: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 22 00:35:33.782: INFO: Number of nodes with available pods: 1 Mar 22 00:35:33.782: INFO: Node latest-worker is running more than one daemon pod Mar 22 00:35:34.750: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 22 00:35:34.753: INFO: Number of nodes with available pods: 1 Mar 22 00:35:34.753: INFO: Node latest-worker is running more than one daemon pod Mar 22 00:35:35.753: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 22 00:35:35.757: INFO: Number of nodes with available pods: 1 Mar 22 00:35:35.757: INFO: Node latest-worker is running more than one daemon pod Mar 22 00:35:36.749: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 22 00:35:36.753: INFO: Number of nodes with available pods: 1 Mar 22 00:35:36.753: INFO: Node latest-worker is running more than one daemon pod Mar 22 00:35:37.749: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 22 00:35:37.753: INFO: Number of nodes with available pods: 1 Mar 22 00:35:37.753: INFO: Node latest-worker is running more than one daemon pod Mar 22 00:35:38.747: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 22 00:35:38.751: INFO: Number of nodes with available pods: 1 Mar 22 00:35:38.751: INFO: Node latest-worker is running more than one daemon pod Mar 22 00:35:39.749: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 22 00:35:39.751: INFO: Number of nodes with available pods: 1 Mar 22 00:35:39.751: INFO: Node latest-worker is running more than one daemon pod Mar 22 00:35:40.749: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 22 00:35:40.753: INFO: Number of nodes with available pods: 1 Mar 22 00:35:40.753: INFO: Node latest-worker is running more than one daemon pod Mar 22 00:35:41.748: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 22 00:35:41.751: INFO: Number of nodes with available pods: 1 Mar 22 00:35:41.751: INFO: Node latest-worker is running more than one daemon pod Mar 22 00:35:42.813: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 22 00:35:42.845: 
INFO: Number of nodes with available pods: 1 Mar 22 00:35:42.845: INFO: Node latest-worker is running more than one daemon pod Mar 22 00:35:43.749: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 22 00:35:43.753: INFO: Number of nodes with available pods: 1 Mar 22 00:35:43.753: INFO: Node latest-worker is running more than one daemon pod Mar 22 00:35:44.749: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 22 00:35:44.753: INFO: Number of nodes with available pods: 1 Mar 22 00:35:44.753: INFO: Node latest-worker is running more than one daemon pod Mar 22 00:35:45.749: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 22 00:35:45.753: INFO: Number of nodes with available pods: 1 Mar 22 00:35:45.753: INFO: Node latest-worker is running more than one daemon pod Mar 22 00:35:46.815: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 22 00:35:46.819: INFO: Number of nodes with available pods: 1 Mar 22 00:35:46.819: INFO: Node latest-worker is running more than one daemon pod Mar 22 00:35:47.754: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 22 00:35:47.758: INFO: Number of nodes with available pods: 1 Mar 22 00:35:47.758: INFO: Node latest-worker is running more than one daemon pod Mar 22 00:35:48.749: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 22 00:35:48.753: INFO: Number of nodes with available pods: 1 Mar 22 00:35:48.753: INFO: Node latest-worker is running more than one daemon pod Mar 22 00:35:49.750: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 22 00:35:49.754: INFO: Number of nodes with available pods: 1 Mar 22 00:35:49.754: INFO: Node latest-worker is running more than one daemon pod Mar 22 00:35:50.749: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 22 00:35:50.753: INFO: Number of nodes with available pods: 1 Mar 22 00:35:50.753: INFO: Node latest-worker is running more than one daemon pod Mar 22 00:35:51.749: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 22 00:35:51.752: INFO: Number of nodes with available pods: 1 Mar 22 00:35:51.752: INFO: Node latest-worker is running more than one daemon pod Mar 22 00:35:52.749: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 22 00:35:52.753: INFO: Number of nodes with available pods: 1 Mar 22 00:35:52.753: INFO: Node latest-worker is running more than one daemon 
pod Mar 22 00:35:53.750: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 22 00:35:53.753: INFO: Number of nodes with available pods: 1 Mar 22 00:35:53.753: INFO: Node latest-worker is running more than one daemon pod Mar 22 00:35:54.877: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 22 00:35:54.882: INFO: Number of nodes with available pods: 1 Mar 22 00:35:54.882: INFO: Node latest-worker is running more than one daemon pod Mar 22 00:35:55.748: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 22 00:35:55.768: INFO: Number of nodes with available pods: 1 Mar 22 00:35:55.768: INFO: Node latest-worker is running more than one daemon pod Mar 22 00:35:56.753: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 22 00:35:56.756: INFO: Number of nodes with available pods: 1 Mar 22 00:35:56.757: INFO: Node latest-worker is running more than one daemon pod Mar 22 00:35:57.845: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 22 00:35:57.849: INFO: Number of nodes with available pods: 1 Mar 22 00:35:57.849: INFO: Node latest-worker is running more than one daemon pod Mar 22 00:35:58.830: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 22 00:35:58.917: INFO: Number of nodes with available pods: 1 Mar 22 00:35:58.917: INFO: Node latest-worker is running more than one daemon pod Mar 22 00:35:59.747: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 22 00:35:59.750: INFO: Number of nodes with available pods: 1 Mar 22 00:35:59.750: INFO: Node latest-worker is running more than one daemon pod Mar 22 00:36:00.747: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 22 00:36:00.750: INFO: Number of nodes with available pods: 2 Mar 22 00:36:00.750: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:101 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-833, will wait for the garbage collector to delete the pods Mar 22 00:36:00.823: INFO: Deleting DaemonSet.extensions daemon-set took: 16.523702ms Mar 22 00:36:01.724: INFO: Terminating DaemonSet.extensions daemon-set pods took: 900.895408ms Mar 22 00:36:55.338: INFO: Number of nodes with available pods: 0 Mar 22 00:36:55.338: INFO: Number of running nodes: 0, number of available pods: 0 Mar 22 00:36:55.341: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"6999052"},"items":null} Mar 22 
00:36:55.344: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"6999052"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 00:36:55.376: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-833" for this suite. • [SLOW TEST:129.950 seconds] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop simple daemon [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":330,"completed":134,"skipped":2269,"failed":7,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]"]} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should provide secure master service [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 00:36:55.383: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should provide secure master service [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 00:36:55.452: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-2164" for this suite. 
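The "secure master service" check is lightweight; to a first approximation it asserts that the built-in kubernetes Service in the default namespace exposes an https port on 443. The sketch below is an approximation of that assertion under v0.21-era client-go, with an illustrative helper name, not the conformance test's exact code.

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// checkSecureMasterService verifies that the built-in "kubernetes"
// Service in the "default" namespace exposes a port named "https" on 443.
func checkSecureMasterService(cs kubernetes.Interface) error {
	svc, err := cs.CoreV1().Services("default").Get(context.TODO(), "kubernetes", metav1.GetOptions{})
	if err != nil {
		return err
	}
	for _, p := range svc.Spec.Ports {
		if p.Name == "https" && p.Port == 443 {
			return nil // the API server service is reachable over TLS
		}
	}
	return fmt.Errorf("service %s/%s has no https:443 port", svc.Namespace, svc.Name)
}
```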
[AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 •{"msg":"PASSED [sig-network] Services should provide secure master service [Conformance]","total":330,"completed":135,"skipped":2290,"failed":7,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]"]} SSSSSSSSSS ------------------------------ [sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-network] EndpointSliceMirroring /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 00:36:55.460: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename endpointslicemirroring STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] EndpointSliceMirroring /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/endpointslicemirroring.go:39 [It] should mirror a custom Endpoints resource through create update and delete [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: mirroring a new custom Endpoint Mar 22 00:36:55.638: INFO: Error listing EndpointSlices: the server could not find the requested resource Mar 22 00:36:57.642: INFO: Error listing EndpointSlices: the server could not find the requested resource Mar 22 00:36:59.640: INFO: Error listing EndpointSlices: the server could not find the requested resource Mar 22 00:37:01.640: INFO: Error listing EndpointSlices: the server could not find the requested resource Mar 22 00:37:03.641: INFO: Error listing EndpointSlices: the server could not find the requested resource Mar 22 00:37:05.641: INFO: Error listing EndpointSlices: the server could not find the requested resource Mar 22 00:37:07.641: INFO: Error listing EndpointSlices: the server could not find the requested resource Mar 22 00:37:07.642: INFO: Error listing EndpointSlices: the server could not find the requested resource Mar 22 00:37:07.642: FAIL: Did not find matching EndpointSlice for endpointslicemirroring-342/example-custom-endpoints: timed out waiting for the condition Full Stack Trace k8s.io/kubernetes/test/e2e/network.glob..func7.2() /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/endpointslicemirroring.go:79 +0x2e5 k8s.io/kubernetes/test/e2e.RunE2ETests(0xc002c6a180) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c k8s.io/kubernetes/test/e2e.TestE2E(0xc002c6a180) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b 
testing.tRunner(0xc002c6a180, 0x6d60740) /usr/local/go/src/testing/testing.go:1194 +0xef created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1239 +0x2b3 [AfterEach] [sig-network] EndpointSliceMirroring /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 STEP: Collecting events from namespace "endpointslicemirroring-342". STEP: Found 0 events. Mar 22 00:37:07.649: INFO: POD NODE PHASE GRACE CONDITIONS Mar 22 00:37:07.649: INFO: Mar 22 00:37:07.740: INFO: Logging node info for node latest-control-plane Mar 22 00:37:07.743: INFO: Node Info: &Node{ObjectMeta:{latest-control-plane 490b9532-4cb6-4803-8805-500c50bef538 6998216 0 2021-02-19 10:11:38 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-control-plane kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master:] map[kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2021-02-19 10:11:38 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {kubeadm Update v1 2021-02-19 10:11:39 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{}}}}} {kube-controller-manager Update v1 2021-02-19 10:12:16 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:kind://docker/latest/latest-control-plane,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-03-22 00:34:38 +0000 UTC,LastTransitionTime:2021-02-19 10:11:38 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-03-22 00:34:38 +0000 UTC,LastTransitionTime:2021-02-19 10:11:38 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-03-22 00:34:38 +0000 UTC,LastTransitionTime:2021-02-19 10:11:38 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-03-22 00:34:38 +0000 UTC,LastTransitionTime:2021-02-19 10:12:15 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.14,},NodeAddress{Type:Hostname,Address:latest-control-plane,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:2277e49732264d9b915753a27b5b08cc,SystemUUID:3fcd47a6-9190-448f-a26a-9823c0424f23,BootID:b267d78b-f69b-4338-80e8-3f4944338e5d,KernelVersion:4.15.0-118-generic,OSImage:Ubuntu 20.04 LTS,ContainerRuntimeVersion:containerd://1.3.3-14-g449e9269,KubeletVersion:v1.21.0-alpha.0,KubeProxyVersion:v1.21.0-alpha.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/etcd:3.4.13-0],SizeBytes:254659261,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.21.0-alpha.0],SizeBytes:136866158,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.21.0-alpha.0],SizeBytes:95511852,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.21.0-alpha.0],SizeBytes:88147262,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.21.0-alpha.0],SizeBytes:66088748,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:a3f7549ea04c419276e8b84e90a515bbce5bc8a057be2ed974ec45492eca346e k8s.gcr.io/e2e-test-images/agnhost:2.26],SizeBytes:49216572,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:4f3eec1e602e1e0b0e5858c8f3d399d0ef536799ec1f7666e025f211001cb706 
k8s.gcr.io/e2e-test-images/agnhost:2.28],SizeBytes:49210832,},ContainerImage{Names:[k8s.gcr.io/coredns:1.7.0],SizeBytes:45355487,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 docker.io/aquasec/kube-bench:0.3.1],SizeBytes:8042926,},ContainerImage{Names:[k8s.gcr.io/pause:3.2],SizeBytes:685724,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:301268,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 22 00:37:07.744: INFO: Logging kubelet events for node latest-control-plane Mar 22 00:37:07.750: INFO: Logging pods the kubelet thinks is on node latest-control-plane Mar 22 00:37:07.772: INFO: kube-controller-manager-latest-control-plane started at 2021-02-19 10:11:46 +0000 UTC (0+1 container statuses recorded) Mar 22 00:37:07.772: INFO: Container kube-controller-manager ready: true, restart count 0 Mar 22 00:37:07.772: INFO: kube-scheduler-latest-control-plane started at 2021-02-19 10:11:46 +0000 UTC (0+1 container statuses recorded) Mar 22 00:37:07.772: INFO: Container kube-scheduler ready: true, restart count 0 Mar 22 00:37:07.772: INFO: kube-apiserver-latest-control-plane started at 2021-02-19 10:11:46 +0000 UTC (0+1 container statuses recorded) Mar 22 00:37:07.772: INFO: Container kube-apiserver ready: true, restart count 0 Mar 22 00:37:07.772: INFO: kindnet-94zqp started at 2021-02-19 10:11:54 +0000 UTC (0+1 container statuses recorded) Mar 22 00:37:07.772: INFO: Container kindnet-cni ready: true, restart count 0 Mar 22 00:37:07.772: INFO: coredns-74ff55c5b-9rxsk started at 2021-03-22 00:01:47 +0000 UTC (0+1 container statuses recorded) Mar 22 00:37:07.772: INFO: Container coredns ready: true, restart count 0 Mar 22 00:37:07.772: INFO: local-path-provisioner-8b46957d4-54gls started at 2021-02-19 10:12:21 +0000 UTC (0+1 container statuses recorded) Mar 22 00:37:07.772: INFO: Container local-path-provisioner ready: true, restart count 0 Mar 22 00:37:07.772: INFO: etcd-latest-control-plane started at 2021-02-19 10:11:46 +0000 UTC (0+1 container statuses recorded) Mar 22 00:37:07.772: INFO: Container etcd ready: true, restart count 0 Mar 22 00:37:07.772: INFO: kube-proxy-6jdsd started at 2021-02-19 10:11:54 +0000 UTC (0+1 container statuses recorded) Mar 22 00:37:07.772: INFO: Container kube-proxy ready: true, restart count 0 Mar 22 00:37:07.772: INFO: coredns-74ff55c5b-tqd5x started at 2021-03-22 00:01:48 +0000 UTC (0+1 container statuses recorded) Mar 22 00:37:07.772: INFO: Container coredns ready: true, restart count 0 W0322 00:37:07.777922 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. 
Mar 22 00:37:07.875: INFO: Latency metrics for node latest-control-plane Mar 22 00:37:07.875: INFO: Logging node info for node latest-worker Mar 22 00:37:07.879: INFO: Node Info: &Node{ObjectMeta:{latest-worker 52cd6d4b-d53f-435d-801a-04c2822dec44 6999185 0 2021-02-19 10:12:05 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux io.kubernetes.storage.mock/node:some-mock-node kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-worker kubernetes.io/os:linux] map[csi.volume.kubernetes.io/nodeid:{"csi-mock-csi-mock-volumes-11":"csi-mock-csi-mock-volumes-11","csi-mock-csi-mock-volumes-1170":"csi-mock-csi-mock-volumes-1170","csi-mock-csi-mock-volumes-1204":"csi-mock-csi-mock-volumes-1204","csi-mock-csi-mock-volumes-1243":"csi-mock-csi-mock-volumes-1243","csi-mock-csi-mock-volumes-135":"csi-mock-csi-mock-volumes-135","csi-mock-csi-mock-volumes-1517":"csi-mock-csi-mock-volumes-1517","csi-mock-csi-mock-volumes-1561":"csi-mock-csi-mock-volumes-1561","csi-mock-csi-mock-volumes-1595":"csi-mock-csi-mock-volumes-1595","csi-mock-csi-mock-volumes-1714":"csi-mock-csi-mock-volumes-1714","csi-mock-csi-mock-volumes-1732":"csi-mock-csi-mock-volumes-1732","csi-mock-csi-mock-volumes-1876":"csi-mock-csi-mock-volumes-1876","csi-mock-csi-mock-volumes-1945":"csi-mock-csi-mock-volumes-1945","csi-mock-csi-mock-volumes-2041":"csi-mock-csi-mock-volumes-2041","csi-mock-csi-mock-volumes-2057":"csi-mock-csi-mock-volumes-2057","csi-mock-csi-mock-volumes-2208":"csi-mock-csi-mock-volumes-2208","csi-mock-csi-mock-volumes-2302":"csi-mock-csi-mock-volumes-2302","csi-mock-csi-mock-volumes-2459":"csi-mock-csi-mock-volumes-2459","csi-mock-csi-mock-volumes-2511":"csi-mock-csi-mock-volumes-2511","csi-mock-csi-mock-volumes-2706":"csi-mock-csi-mock-volumes-2706","csi-mock-csi-mock-volumes-2710":"csi-mock-csi-mock-volumes-2710","csi-mock-csi-mock-volumes-273":"csi-mock-csi-mock-volumes-273","csi-mock-csi-mock-volumes-2782":"csi-mock-csi-mock-volumes-2782","csi-mock-csi-mock-volumes-2797":"csi-mock-csi-mock-volumes-2797","csi-mock-csi-mock-volumes-2805":"csi-mock-csi-mock-volumes-2805","csi-mock-csi-mock-volumes-2843":"csi-mock-csi-mock-volumes-2843","csi-mock-csi-mock-volumes-3075":"csi-mock-csi-mock-volumes-3075","csi-mock-csi-mock-volumes-3107":"csi-mock-csi-mock-volumes-3107","csi-mock-csi-mock-volumes-3115":"csi-mock-csi-mock-volumes-3115","csi-mock-csi-mock-volumes-3164":"csi-mock-csi-mock-volumes-3164","csi-mock-csi-mock-volumes-3202":"csi-mock-csi-mock-volumes-3202","csi-mock-csi-mock-volumes-3235":"csi-mock-csi-mock-volumes-3235","csi-mock-csi-mock-volumes-3298":"csi-mock-csi-mock-volumes-3298","csi-mock-csi-mock-volumes-3313":"csi-mock-csi-mock-volumes-3313","csi-mock-csi-mock-volumes-3364":"csi-mock-csi-mock-volumes-3364","csi-mock-csi-mock-volumes-3488":"csi-mock-csi-mock-volumes-3488","csi-mock-csi-mock-volumes-3490":"csi-mock-csi-mock-volumes-3490","csi-mock-csi-mock-volumes-3546":"csi-mock-csi-mock-volumes-3546","csi-mock-csi-mock-volumes-3568":"csi-mock-csi-mock-volumes-3568","csi-mock-csi-mock-volumes-3595":"csi-mock-csi-mock-volumes-3595","csi-mock-csi-mock-volumes-3615":"csi-mock-csi-mock-volumes-3615","csi-mock-csi-mock-volumes-3622":"csi-mock-csi-mock-volumes-3622","csi-mock-csi-mock-volumes-3660":"csi-mock-csi-mock-volumes-3660","csi-mock-csi-mock-volumes-3738":"csi-mock-csi-mock-volumes-3738","csi-mock-csi-mock-volumes-380":"csi-mock-csi-mock-volumes-380","csi-mock-csi-mock-volumes-3905":"csi-mock-csi-mock-volumes-3905","csi-mock-csi-mock-volumes-3983":"csi-mock-csi-mock-volumes-3983","csi-mock
-csi-mock-volumes-4658":"csi-mock-csi-mock-volumes-4658","csi-mock-csi-mock-volumes-4689":"csi-mock-csi-mock-volumes-4689","csi-mock-csi-mock-volumes-4839":"csi-mock-csi-mock-volumes-4839","csi-mock-csi-mock-volumes-4871":"csi-mock-csi-mock-volumes-4871","csi-mock-csi-mock-volumes-4885":"csi-mock-csi-mock-volumes-4885","csi-mock-csi-mock-volumes-4888":"csi-mock-csi-mock-volumes-4888","csi-mock-csi-mock-volumes-5028":"csi-mock-csi-mock-volumes-5028","csi-mock-csi-mock-volumes-5118":"csi-mock-csi-mock-volumes-5118","csi-mock-csi-mock-volumes-5120":"csi-mock-csi-mock-volumes-5120","csi-mock-csi-mock-volumes-5160":"csi-mock-csi-mock-volumes-5160","csi-mock-csi-mock-volumes-5164":"csi-mock-csi-mock-volumes-5164","csi-mock-csi-mock-volumes-5225":"csi-mock-csi-mock-volumes-5225","csi-mock-csi-mock-volumes-526":"csi-mock-csi-mock-volumes-526","csi-mock-csi-mock-volumes-5365":"csi-mock-csi-mock-volumes-5365","csi-mock-csi-mock-volumes-5399":"csi-mock-csi-mock-volumes-5399","csi-mock-csi-mock-volumes-5443":"csi-mock-csi-mock-volumes-5443","csi-mock-csi-mock-volumes-5520":"csi-mock-csi-mock-volumes-5520","csi-mock-csi-mock-volumes-5561":"csi-mock-csi-mock-volumes-5561","csi-mock-csi-mock-volumes-5608":"csi-mock-csi-mock-volumes-5608","csi-mock-csi-mock-volumes-5652":"csi-mock-csi-mock-volumes-5652","csi-mock-csi-mock-volumes-5672":"csi-mock-csi-mock-volumes-5672","csi-mock-csi-mock-volumes-569":"csi-mock-csi-mock-volumes-569","csi-mock-csi-mock-volumes-5759":"csi-mock-csi-mock-volumes-5759","csi-mock-csi-mock-volumes-5910":"csi-mock-csi-mock-volumes-5910","csi-mock-csi-mock-volumes-6046":"csi-mock-csi-mock-volumes-6046","csi-mock-csi-mock-volumes-6099":"csi-mock-csi-mock-volumes-6099","csi-mock-csi-mock-volumes-621":"csi-mock-csi-mock-volumes-621","csi-mock-csi-mock-volumes-6347":"csi-mock-csi-mock-volumes-6347","csi-mock-csi-mock-volumes-6447":"csi-mock-csi-mock-volumes-6447","csi-mock-csi-mock-volumes-6752":"csi-mock-csi-mock-volumes-6752","csi-mock-csi-mock-volumes-6763":"csi-mock-csi-mock-volumes-6763","csi-mock-csi-mock-volumes-7184":"csi-mock-csi-mock-volumes-7184","csi-mock-csi-mock-volumes-7244":"csi-mock-csi-mock-volumes-7244","csi-mock-csi-mock-volumes-7259":"csi-mock-csi-mock-volumes-7259","csi-mock-csi-mock-volumes-726":"csi-mock-csi-mock-volumes-726","csi-mock-csi-mock-volumes-7302":"csi-mock-csi-mock-volumes-7302","csi-mock-csi-mock-volumes-7346":"csi-mock-csi-mock-volumes-7346","csi-mock-csi-mock-volumes-7378":"csi-mock-csi-mock-volumes-7378","csi-mock-csi-mock-volumes-7385":"csi-mock-csi-mock-volumes-7385","csi-mock-csi-mock-volumes-746":"csi-mock-csi-mock-volumes-746","csi-mock-csi-mock-volumes-7574":"csi-mock-csi-mock-volumes-7574","csi-mock-csi-mock-volumes-7712":"csi-mock-csi-mock-volumes-7712","csi-mock-csi-mock-volumes-7749":"csi-mock-csi-mock-volumes-7749","csi-mock-csi-mock-volumes-7820":"csi-mock-csi-mock-volumes-7820","csi-mock-csi-mock-volumes-7946":"csi-mock-csi-mock-volumes-7946","csi-mock-csi-mock-volumes-8159":"csi-mock-csi-mock-volumes-8159","csi-mock-csi-mock-volumes-8382":"csi-mock-csi-mock-volumes-8382","csi-mock-csi-mock-volumes-8458":"csi-mock-csi-mock-volumes-8458","csi-mock-csi-mock-volumes-8569":"csi-mock-csi-mock-volumes-8569","csi-mock-csi-mock-volumes-8626":"csi-mock-csi-mock-volumes-8626","csi-mock-csi-mock-volumes-8627":"csi-mock-csi-mock-volumes-8627","csi-mock-csi-mock-volumes-8774":"csi-mock-csi-mock-volumes-8774","csi-mock-csi-mock-volumes-8777":"csi-mock-csi-mock-volumes-8777","csi-mock-csi-mock-volumes-8880":"csi-mock-csi-mock-volumes-8880","csi-mock-
csi-mock-volumes-923":"csi-mock-csi-mock-volumes-923","csi-mock-csi-mock-volumes-9279":"csi-mock-csi-mock-volumes-9279","csi-mock-csi-mock-volumes-9372":"csi-mock-csi-mock-volumes-9372","csi-mock-csi-mock-volumes-9687":"csi-mock-csi-mock-volumes-9687","csi-mock-csi-mock-volumes-969":"csi-mock-csi-mock-volumes-969","csi-mock-csi-mock-volumes-983":"csi-mock-csi-mock-volumes-983","csi-mock-csi-mock-volumes-9858":"csi-mock-csi-mock-volumes-9858","csi-mock-csi-mock-volumes-9983":"csi-mock-csi-mock-volumes-9983","csi-mock-csi-mock-volumes-9992":"csi-mock-csi-mock-volumes-9992","csi-mock-csi-mock-volumes-9995":"csi-mock-csi-mock-volumes-9995"} kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-02-19 10:12:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {kube-controller-manager Update v1 2021-03-22 00:35:55 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}}}}} {kubelet Update v1 2021-03-22 00:36:18 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:csi.volume.kubernetes.io/nodeid":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:io.kubernetes.storage.mock/node":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:kind://docker/latest/latest-worker,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 
DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-03-22 00:36:18 +0000 UTC,LastTransitionTime:2021-02-19 10:12:05 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-03-22 00:36:18 +0000 UTC,LastTransitionTime:2021-02-19 10:12:05 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-03-22 00:36:18 +0000 UTC,LastTransitionTime:2021-02-19 10:12:05 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-03-22 00:36:18 +0000 UTC,LastTransitionTime:2021-02-19 10:12:25 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.9,},NodeAddress{Type:Hostname,Address:latest-worker,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:9501a3ffa7bf40adaca8f4cb3d1dcc93,SystemUUID:6921bb21-67c9-42ea-b514-81b1daac5968,BootID:b267d78b-f69b-4338-80e8-3f4944338e5d,KernelVersion:4.15.0-118-generic,OSImage:Ubuntu 20.04 LTS,ContainerRuntimeVersion:containerd://1.3.3-14-g449e9269,KubeletVersion:v1.21.0-alpha.0,KubeProxyVersion:v1.21.0-alpha.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/ollivier/clearwater-cassandra@sha256:7460a5e29c91e16bd36534b5284b27832e1c6e0f7c22a4bb79eda79942d250e1 docker.io/ollivier/clearwater-cassandra:hunter],SizeBytes:386500834,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead-prov@sha256:ef3803b4336ea217f9a467dac7f5c4bc3d2cd0f3cbbcc6c101419506fc3a6fa4 docker.io/ollivier/clearwater-homestead-prov:hunter],SizeBytes:360721934,},ContainerImage{Names:[docker.io/ollivier/clearwater-ellis@sha256:abcaf4a9ec74aa746f3b3949f05ee359349fce2b0b3b2deedf67164719cff5dc docker.io/ollivier/clearwater-ellis:hunter],SizeBytes:351519591,},ContainerImage{Names:[docker.io/ollivier/clearwater-homer@sha256:f8738d756137990120f6aafca7e0bc57186ab591a4b51508ba9ddf30688f4de1 docker.io/ollivier/clearwater-homer:hunter],SizeBytes:344304298,},ContainerImage{Names:[docker.io/ollivier/clearwater-astaire@sha256:5bc730435563dc3e31f3a3b7bb58b5899c453929b624052622b3a9a1c2fd04d8 docker.io/ollivier/clearwater-astaire:hunter],SizeBytes:327310970,},ContainerImage{Names:[docker.io/ollivier/clearwater-bono@sha256:0da41ddbf72c158f8e617015450587634cdf6856696005601b2108df92a27254 docker.io/ollivier/clearwater-bono:hunter],SizeBytes:303708624,},ContainerImage{Names:[docker.io/ollivier/clearwater-sprout@sha256:fd91f90bdfbfac30288ca296962e36537f8fa311d34711904e027098227d9f49 docker.io/ollivier/clearwater-sprout:hunter],SizeBytes:298627136,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead@sha256:8467d6c9caefa6a402eba927ee31f8179b2dc3386ac4768dd04f4bd630a1b9e9 docker.io/ollivier/clearwater-homestead:hunter],SizeBytes:295167572,},ContainerImage{Names:[docker.io/ollivier/clearwater-ralf@sha256:7054a741dbd4ea45fa4efd8138c07e21d35fbed627971559de5df2d03883e94f 
docker.io/ollivier/clearwater-ralf:hunter],SizeBytes:287441316,},ContainerImage{Names:[docker.io/ollivier/clearwater-chronos@sha256:1d5f9b5385db632e3076463fec5a4db5d4a15ef21ac5a112cb42fb0551ffa36d docker.io/ollivier/clearwater-chronos:hunter],SizeBytes:285504787,},ContainerImage{Names:[k8s.gcr.io/etcd:3.4.13-0],SizeBytes:254659261,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.21.0-alpha.0],SizeBytes:136866158,},ContainerImage{Names:[docker.io/aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 docker.io/aquasec/kube-hunter:0.3.1],SizeBytes:124684106,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:112029652,},ContainerImage{Names:[docker.io/gluster/glusterdynamic-provisioner@sha256:90067cb05a7d217651e84576935934fd9dcff8e6bcdcbaa416bbf36fcd09dbd1 docker.io/gluster/glusterdynamic-provisioner:v1.0],SizeBytes:111200078,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner@sha256:8bc20b52ce066dd4ea3d9eaac40c04ea8a77f47c33789676580cf4c7c9ea3c3d k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner:v1.0],SizeBytes:111199402,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.21.0-alpha.0],SizeBytes:95511852,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.21.0-alpha.0],SizeBytes:88147262,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:85425365,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.21.0-alpha.0],SizeBytes:66088748,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:49230179,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:a3f7549ea04c419276e8b84e90a515bbce5bc8a057be2ed974ec45492eca346e k8s.gcr.io/e2e-test-images/agnhost:2.26],SizeBytes:49216572,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:4f3eec1e602e1e0b0e5858c8f3d399d0ef536799ec1f7666e025f211001cb706 k8s.gcr.io/e2e-test-images/agnhost:2.28],SizeBytes:49210832,},ContainerImage{Names:[docker.io/bitnami/kubectl@sha256:471a2f06834db208d0e4bef5856550f911357c41b72d0baefa591b3714839067 docker.io/bitnami/kubectl:latest],SizeBytes:48898314,},ContainerImage{Names:[k8s.gcr.io/coredns:1.7.0],SizeBytes:45355487,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:41902332,},ContainerImage{Names:[docker.io/library/httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a docker.io/library/httpd:2.4.39-alpine],SizeBytes:41901429,},ContainerImage{Names:[docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 docker.io/library/httpd:2.4.38-alpine],SizeBytes:40765017,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 
k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:40765006,},ContainerImage{Names:[docker.io/ollivier/clearwater-live-test@sha256:1ec1c46f531b5466e62e2a3f1e65e97336e8ea0ca50f11a4305775d6aeac9a58 docker.io/ollivier/clearwater-live-test:hunter],SizeBytes:39060692,},ContainerImage{Names:[k8s.gcr.io/build-image/debian-iptables@sha256:abe8cef9e116f2d5ec1175c386e33841ff3386779138b425af876384b0fd7ccb k8s.gcr.io/build-image/debian-iptables:buster-v1.5.0],SizeBytes:38088315,},ContainerImage{Names:[k8s.gcr.io/build-image/debian-iptables@sha256:4c9410a4ee555dcb0e8b7bd6fc77c65ac400f7c5bd4555df8187630efaea6ea4 k8s.gcr.io/build-image/debian-iptables:buster-v1.3.0],SizeBytes:37934917,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:ff02aacd9766d597883fabafc7ad604c719a57611db1bcc1564c69a45b000a55 gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.17],SizeBytes:25311280,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:24757245,},ContainerImage{Names:[docker.io/pingcap/chaos-daemon@sha256:2138de599ca94104101fcd8f3bd04401f8c88f2bc721d0f77d55a81b85f4d92f docker.io/pingcap/chaos-daemon:v0.8.0],SizeBytes:21885344,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:20c828075d1e36f679d6a91e905b0927141eef5e15be0c9a1ca4a6a0ed9313d2 k8s.gcr.io/sig-storage/csi-provisioner:v2.1.0],SizeBytes:21205045,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:78e3393f5fd5ff6c1e5dada2478cfa456fb7164929e573cf9a87bf6532730679 k8s.gcr.io/sig-storage/csi-provisioner:v1.6.0],SizeBytes:19408504,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:2ffa647e8107cfd39e5f464e738dce014c9f5e51b108da36c3ab621048d0bbab k8s.gcr.io/sig-storage/csi-attacher:v2.2.0],SizeBytes:18451536,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:6c6a0332693a7c456378f6abd2bb40611826c1e1a733cadbdae2daab3125b71c k8s.gcr.io/sig-storage/csi-resizer:v0.5.0],SizeBytes:18412631,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:4051e85640c22f8e00c34dbd273576fc9e1e2829992656588062be9c0f69b04b k8s.gcr.io/e2e-test-images/nonroot:1.1],SizeBytes:17748448,},ContainerImage{Names:[docker.io/pingcap/chaos-mesh@sha256:b90c04665e0275602e52df37d4b6fdd4621df7d4e4c823fc63356559f67dea72 docker.io/pingcap/chaos-mesh:v0.8.0],SizeBytes:13863887,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e docker.io/coredns/coredns:latest docker.io/coredns/coredns:1.8.0],SizeBytes:12945155,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:4a6e0769130686518325b21b0c1d0688b54e7c79244d48e1b15634e98e40c6ef docker.io/coredns/coredns:1.7.1],SizeBytes:12935701,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 
docker.io/aquasec/kube-bench:0.3.1],SizeBytes:8042926,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:9622c6a6dac7499a055a382930f4de82905a3c5735c0753f7094115c9c871309 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v1.3.0],SizeBytes:7717137,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 22 00:37:07.880: INFO: Logging kubelet events for node latest-worker Mar 22 00:37:07.887: INFO: Logging pods the kubelet thinks are on node latest-worker Mar 22 00:37:07.894: INFO: kube-proxy-5wvjm started at 2021-02-19 10:12:05 +0000 UTC (0+1 container statuses recorded) Mar 22 00:37:07.894: INFO: Container kube-proxy ready: true, restart count 0 Mar 22 00:37:07.894: INFO: kindnet-l4mzm started at 2021-03-22 00:02:51 +0000 UTC (0+1 container statuses recorded) Mar 22 00:37:07.894: INFO: Container kindnet-cni ready: true, restart count 0 Mar 22 00:37:07.894: INFO: chaos-daemon-vb9xf started at 2021-03-22 00:02:51 +0000 UTC (0+1 container statuses recorded) Mar 22 00:37:07.894: INFO: Container chaos-daemon ready: true, restart count 0 Mar 22 00:37:07.894: INFO: chaos-controller-manager-69c479c674-rdmrr started at 2021-03-22 00:02:52 +0000 UTC (0+1 container statuses recorded) Mar 22 00:37:07.894: INFO: Container chaos-mesh ready: true, restart count 0 Mar 22 00:37:07.894: INFO: csi-mockplugin-0 started at 2021-03-22 00:35:42 +0000 UTC (0+3 container statuses recorded) Mar 22 00:37:07.894: INFO: Container csi-provisioner ready: true, restart count 0 Mar 22 00:37:07.894: INFO: Container driver-registrar ready: true, restart count 0 Mar 22 00:37:07.894: INFO: Container mock ready: true, restart count 0 Mar 22 00:37:07.894: INFO: csi-mockplugin-attacher-0 started at 2021-03-22 00:35:42 +0000 UTC (0+1 container statuses recorded) Mar 22 00:37:07.894: INFO: Container csi-attacher ready: true, restart count 0 W0322 00:37:07.899869 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. 
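------------------------------
The node dumps in this failure block (latest-worker above; latest-worker2 follows) are the e2e framework's standard diagnostics after a test failure: the full Node object, the pods bound to the node, and latency metrics. To reproduce a comparable dump outside the framework, a minimal client-go sketch is shown below; the file layout and variable names are illustrative, and only the client-go calls themselves are real API.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Same kubeconfig path the run above logs via ">>> kubeConfig".
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(config)

	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, node := range nodes.Items {
		// Corresponds to "Logging node info for node <name>".
		fmt.Printf("Node Info: %+v\n", node)
		// Corresponds to "Logging pods the kubelet thinks are on node <name>":
		// pods whose spec.nodeName binds them to this node.
		pods, err := cs.CoreV1().Pods("").List(context.TODO(), metav1.ListOptions{
			FieldSelector: "spec.nodeName=" + node.Name,
		})
		if err != nil {
			panic(err)
		}
		for _, pod := range pods.Items {
			fmt.Printf("%s started at %v (%d container statuses recorded)\n",
				pod.Name, pod.Status.StartTime, len(pod.Status.ContainerStatuses))
		}
	}
}
------------------------------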
Mar 22 00:37:08.108: INFO: Latency metrics for node latest-worker Mar 22 00:37:08.108: INFO: Logging node info for node latest-worker2 Mar 22 00:37:08.112: INFO: Node Info: &Node{ObjectMeta:{latest-worker2 7d2a1377-0c6f-45fb-899e-6c307ecb1803 6998474 0 2021-02-19 10:12:05 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux io.kubernetes.storage.mock/node:some-mock-node kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-worker2 kubernetes.io/os:linux] map[csi.volume.kubernetes.io/nodeid:{"csi-mock-csi-mock-volumes-1005":"csi-mock-csi-mock-volumes-1005","csi-mock-csi-mock-volumes-1125":"csi-mock-csi-mock-volumes-1125","csi-mock-csi-mock-volumes-1158":"csi-mock-csi-mock-volumes-1158","csi-mock-csi-mock-volumes-1167":"csi-mock-csi-mock-volumes-1167","csi-mock-csi-mock-volumes-117":"csi-mock-csi-mock-volumes-117","csi-mock-csi-mock-volumes-123":"csi-mock-csi-mock-volumes-123","csi-mock-csi-mock-volumes-1248":"csi-mock-csi-mock-volumes-1248","csi-mock-csi-mock-volumes-132":"csi-mock-csi-mock-volumes-132","csi-mock-csi-mock-volumes-1462":"csi-mock-csi-mock-volumes-1462","csi-mock-csi-mock-volumes-1508":"csi-mock-csi-mock-volumes-1508","csi-mock-csi-mock-volumes-1620":"csi-mock-csi-mock-volumes-1620","csi-mock-csi-mock-volumes-1694":"csi-mock-csi-mock-volumes-1694","csi-mock-csi-mock-volumes-1823":"csi-mock-csi-mock-volumes-1823","csi-mock-csi-mock-volumes-1829":"csi-mock-csi-mock-volumes-1829","csi-mock-csi-mock-volumes-1852":"csi-mock-csi-mock-volumes-1852","csi-mock-csi-mock-volumes-1856":"csi-mock-csi-mock-volumes-1856","csi-mock-csi-mock-volumes-194":"csi-mock-csi-mock-volumes-194","csi-mock-csi-mock-volumes-1943":"csi-mock-csi-mock-volumes-1943","csi-mock-csi-mock-volumes-2103":"csi-mock-csi-mock-volumes-2103","csi-mock-csi-mock-volumes-2109":"csi-mock-csi-mock-volumes-2109","csi-mock-csi-mock-volumes-2134":"csi-mock-csi-mock-volumes-2134","csi-mock-csi-mock-volumes-2212":"csi-mock-csi-mock-volumes-2212","csi-mock-csi-mock-volumes-2302":"csi-mock-csi-mock-volumes-2302","csi-mock-csi-mock-volumes-2308":"csi-mock-csi-mock-volumes-2308","csi-mock-csi-mock-volumes-2407":"csi-mock-csi-mock-volumes-2407","csi-mock-csi-mock-volumes-2458":"csi-mock-csi-mock-volumes-2458","csi-mock-csi-mock-volumes-2474":"csi-mock-csi-mock-volumes-2474","csi-mock-csi-mock-volumes-254":"csi-mock-csi-mock-volumes-254","csi-mock-csi-mock-volumes-2575":"csi-mock-csi-mock-volumes-2575","csi-mock-csi-mock-volumes-2622":"csi-mock-csi-mock-volumes-2622","csi-mock-csi-mock-volumes-2651":"csi-mock-csi-mock-volumes-2651","csi-mock-csi-mock-volumes-2668":"csi-mock-csi-mock-volumes-2668","csi-mock-csi-mock-volumes-2781":"csi-mock-csi-mock-volumes-2781","csi-mock-csi-mock-volumes-2791":"csi-mock-csi-mock-volumes-2791","csi-mock-csi-mock-volumes-2823":"csi-mock-csi-mock-volumes-2823","csi-mock-csi-mock-volumes-2847":"csi-mock-csi-mock-volumes-2847","csi-mock-csi-mock-volumes-295":"csi-mock-csi-mock-volumes-295","csi-mock-csi-mock-volumes-3129":"csi-mock-csi-mock-volumes-3129","csi-mock-csi-mock-volumes-3190":"csi-mock-csi-mock-volumes-3190","csi-mock-csi-mock-volumes-3428":"csi-mock-csi-mock-volumes-3428","csi-mock-csi-mock-volumes-3490":"csi-mock-csi-mock-volumes-3490","csi-mock-csi-mock-volumes-3491":"csi-mock-csi-mock-volumes-3491","csi-mock-csi-mock-volumes-3532":"csi-mock-csi-mock-volumes-3532","csi-mock-csi-mock-volumes-3609":"csi-mock-csi-mock-volumes-3609","csi-mock-csi-mock-volumes-368":"csi-mock-csi-mock-volumes-368","csi-mock-csi-mock-volumes-3698":"csi-mock-csi-mock-volumes-3698","csi-mock-csi-moc
k-volumes-3714":"csi-mock-csi-mock-volumes-3714","csi-mock-csi-mock-volumes-3720":"csi-mock-csi-mock-volumes-3720","csi-mock-csi-mock-volumes-3722":"csi-mock-csi-mock-volumes-3722","csi-mock-csi-mock-volumes-3723":"csi-mock-csi-mock-volumes-3723","csi-mock-csi-mock-volumes-3783":"csi-mock-csi-mock-volumes-3783","csi-mock-csi-mock-volumes-3887":"csi-mock-csi-mock-volumes-3887","csi-mock-csi-mock-volumes-3973":"csi-mock-csi-mock-volumes-3973","csi-mock-csi-mock-volumes-4074":"csi-mock-csi-mock-volumes-4074","csi-mock-csi-mock-volumes-4164":"csi-mock-csi-mock-volumes-4164","csi-mock-csi-mock-volumes-4181":"csi-mock-csi-mock-volumes-4181","csi-mock-csi-mock-volumes-4442":"csi-mock-csi-mock-volumes-4442","csi-mock-csi-mock-volumes-4483":"csi-mock-csi-mock-volumes-4483","csi-mock-csi-mock-volumes-4549":"csi-mock-csi-mock-volumes-4549","csi-mock-csi-mock-volumes-4707":"csi-mock-csi-mock-volumes-4707","csi-mock-csi-mock-volumes-4906":"csi-mock-csi-mock-volumes-4906","csi-mock-csi-mock-volumes-4977":"csi-mock-csi-mock-volumes-4977","csi-mock-csi-mock-volumes-5116":"csi-mock-csi-mock-volumes-5116","csi-mock-csi-mock-volumes-5166":"csi-mock-csi-mock-volumes-5166","csi-mock-csi-mock-volumes-5233":"csi-mock-csi-mock-volumes-5233","csi-mock-csi-mock-volumes-5344":"csi-mock-csi-mock-volumes-5344","csi-mock-csi-mock-volumes-5406":"csi-mock-csi-mock-volumes-5406","csi-mock-csi-mock-volumes-5433":"csi-mock-csi-mock-volumes-5433","csi-mock-csi-mock-volumes-5483":"csi-mock-csi-mock-volumes-5483","csi-mock-csi-mock-volumes-5486":"csi-mock-csi-mock-volumes-5486","csi-mock-csi-mock-volumes-5520":"csi-mock-csi-mock-volumes-5520","csi-mock-csi-mock-volumes-5540":"csi-mock-csi-mock-volumes-5540","csi-mock-csi-mock-volumes-5592":"csi-mock-csi-mock-volumes-5592","csi-mock-csi-mock-volumes-5741":"csi-mock-csi-mock-volumes-5741","csi-mock-csi-mock-volumes-5753":"csi-mock-csi-mock-volumes-5753","csi-mock-csi-mock-volumes-5790":"csi-mock-csi-mock-volumes-5790","csi-mock-csi-mock-volumes-5820":"csi-mock-csi-mock-volumes-5820","csi-mock-csi-mock-volumes-5830":"csi-mock-csi-mock-volumes-5830","csi-mock-csi-mock-volumes-5880":"csi-mock-csi-mock-volumes-5880","csi-mock-csi-mock-volumes-5886":"csi-mock-csi-mock-volumes-5886","csi-mock-csi-mock-volumes-5899":"csi-mock-csi-mock-volumes-5899","csi-mock-csi-mock-volumes-5925":"csi-mock-csi-mock-volumes-5925","csi-mock-csi-mock-volumes-5928":"csi-mock-csi-mock-volumes-5928","csi-mock-csi-mock-volumes-6098":"csi-mock-csi-mock-volumes-6098","csi-mock-csi-mock-volumes-6154":"csi-mock-csi-mock-volumes-6154","csi-mock-csi-mock-volumes-6193":"csi-mock-csi-mock-volumes-6193","csi-mock-csi-mock-volumes-6237":"csi-mock-csi-mock-volumes-6237","csi-mock-csi-mock-volumes-6393":"csi-mock-csi-mock-volumes-6393","csi-mock-csi-mock-volumes-6394":"csi-mock-csi-mock-volumes-6394","csi-mock-csi-mock-volumes-6468":"csi-mock-csi-mock-volumes-6468","csi-mock-csi-mock-volumes-6508":"csi-mock-csi-mock-volumes-6508","csi-mock-csi-mock-volumes-6516":"csi-mock-csi-mock-volumes-6516","csi-mock-csi-mock-volumes-6520":"csi-mock-csi-mock-volumes-6520","csi-mock-csi-mock-volumes-6574":"csi-mock-csi-mock-volumes-6574","csi-mock-csi-mock-volumes-6663":"csi-mock-csi-mock-volumes-6663","csi-mock-csi-mock-volumes-6715":"csi-mock-csi-mock-volumes-6715","csi-mock-csi-mock-volumes-6754":"csi-mock-csi-mock-volumes-6754","csi-mock-csi-mock-volumes-6804":"csi-mock-csi-mock-volumes-6804","csi-mock-csi-mock-volumes-6918":"csi-mock-csi-mock-volumes-6918","csi-mock-csi-mock-volumes-6925":"csi-mock-csi-mock-volumes-6925","csi-moc
k-csi-mock-volumes-7092":"csi-mock-csi-mock-volumes-7092","csi-mock-csi-mock-volumes-7139":"csi-mock-csi-mock-volumes-7139","csi-mock-csi-mock-volumes-7270":"csi-mock-csi-mock-volumes-7270","csi-mock-csi-mock-volumes-7273":"csi-mock-csi-mock-volumes-7273","csi-mock-csi-mock-volumes-7442":"csi-mock-csi-mock-volumes-7442","csi-mock-csi-mock-volumes-7448":"csi-mock-csi-mock-volumes-7448","csi-mock-csi-mock-volumes-7543":"csi-mock-csi-mock-volumes-7543","csi-mock-csi-mock-volumes-7597":"csi-mock-csi-mock-volumes-7597","csi-mock-csi-mock-volumes-7608":"csi-mock-csi-mock-volumes-7608","csi-mock-csi-mock-volumes-7642":"csi-mock-csi-mock-volumes-7642","csi-mock-csi-mock-volumes-7659":"csi-mock-csi-mock-volumes-7659","csi-mock-csi-mock-volumes-7725":"csi-mock-csi-mock-volumes-7725","csi-mock-csi-mock-volumes-7760":"csi-mock-csi-mock-volumes-7760","csi-mock-csi-mock-volumes-778":"csi-mock-csi-mock-volumes-778","csi-mock-csi-mock-volumes-7811":"csi-mock-csi-mock-volumes-7811","csi-mock-csi-mock-volumes-7819":"csi-mock-csi-mock-volumes-7819","csi-mock-csi-mock-volumes-791":"csi-mock-csi-mock-volumes-791","csi-mock-csi-mock-volumes-7929":"csi-mock-csi-mock-volumes-7929","csi-mock-csi-mock-volumes-7930":"csi-mock-csi-mock-volumes-7930","csi-mock-csi-mock-volumes-7933":"csi-mock-csi-mock-volumes-7933","csi-mock-csi-mock-volumes-7950":"csi-mock-csi-mock-volumes-7950","csi-mock-csi-mock-volumes-8005":"csi-mock-csi-mock-volumes-8005","csi-mock-csi-mock-volumes-8070":"csi-mock-csi-mock-volumes-8070","csi-mock-csi-mock-volumes-8123":"csi-mock-csi-mock-volumes-8123","csi-mock-csi-mock-volumes-8132":"csi-mock-csi-mock-volumes-8132","csi-mock-csi-mock-volumes-8134":"csi-mock-csi-mock-volumes-8134","csi-mock-csi-mock-volumes-8381":"csi-mock-csi-mock-volumes-8381","csi-mock-csi-mock-volumes-8391":"csi-mock-csi-mock-volumes-8391","csi-mock-csi-mock-volumes-8409":"csi-mock-csi-mock-volumes-8409","csi-mock-csi-mock-volumes-8420":"csi-mock-csi-mock-volumes-8420","csi-mock-csi-mock-volumes-8467":"csi-mock-csi-mock-volumes-8467","csi-mock-csi-mock-volumes-8581":"csi-mock-csi-mock-volumes-8581","csi-mock-csi-mock-volumes-8619":"csi-mock-csi-mock-volumes-8619","csi-mock-csi-mock-volumes-8708":"csi-mock-csi-mock-volumes-8708","csi-mock-csi-mock-volumes-8766":"csi-mock-csi-mock-volumes-8766","csi-mock-csi-mock-volumes-8789":"csi-mock-csi-mock-volumes-8789","csi-mock-csi-mock-volumes-8800":"csi-mock-csi-mock-volumes-8800","csi-mock-csi-mock-volumes-8819":"csi-mock-csi-mock-volumes-8819","csi-mock-csi-mock-volumes-8830":"csi-mock-csi-mock-volumes-8830","csi-mock-csi-mock-volumes-9034":"csi-mock-csi-mock-volumes-9034","csi-mock-csi-mock-volumes-9051":"csi-mock-csi-mock-volumes-9051","csi-mock-csi-mock-volumes-9052":"csi-mock-csi-mock-volumes-9052","csi-mock-csi-mock-volumes-9211":"csi-mock-csi-mock-volumes-9211","csi-mock-csi-mock-volumes-9234":"csi-mock-csi-mock-volumes-9234","csi-mock-csi-mock-volumes-9238":"csi-mock-csi-mock-volumes-9238","csi-mock-csi-mock-volumes-9316":"csi-mock-csi-mock-volumes-9316","csi-mock-csi-mock-volumes-9344":"csi-mock-csi-mock-volumes-9344","csi-mock-csi-mock-volumes-9386":"csi-mock-csi-mock-volumes-9386","csi-mock-csi-mock-volumes-941":"csi-mock-csi-mock-volumes-941","csi-mock-csi-mock-volumes-9416":"csi-mock-csi-mock-volumes-9416","csi-mock-csi-mock-volumes-9426":"csi-mock-csi-mock-volumes-9426","csi-mock-csi-mock-volumes-9429":"csi-mock-csi-mock-volumes-9429","csi-mock-csi-mock-volumes-9438":"csi-mock-csi-mock-volumes-9438","csi-mock-csi-mock-volumes-9439":"csi-mock-csi-mock-volumes-9439","csi-
mock-csi-mock-volumes-9665":"csi-mock-csi-mock-volumes-9665","csi-mock-csi-mock-volumes-9772":"csi-mock-csi-mock-volumes-9772","csi-mock-csi-mock-volumes-9793":"csi-mock-csi-mock-volumes-9793","csi-mock-csi-mock-volumes-9921":"csi-mock-csi-mock-volumes-9921","csi-mock-csi-mock-volumes-9953":"csi-mock-csi-mock-volumes-9953"} kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-02-19 10:12:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {kube-controller-manager Update v1 2021-03-22 00:34:44 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}} {kubelet Update v1 2021-03-22 00:34:58 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:csi.volume.kubernetes.io/nodeid":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:io.kubernetes.storage.mock/node":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:kind://docker/latest/latest-worker2,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-03-22 00:34:58 +0000 UTC,LastTransitionTime:2021-02-19 10:12:05 +0000 
UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-03-22 00:34:58 +0000 UTC,LastTransitionTime:2021-02-19 10:12:05 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-03-22 00:34:58 +0000 UTC,LastTransitionTime:2021-02-19 10:12:05 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-03-22 00:34:58 +0000 UTC,LastTransitionTime:2021-02-19 10:12:35 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.13,},NodeAddress{Type:Hostname,Address:latest-worker2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:c99819c175ab4bf18f91789643e4cec7,SystemUUID:67d3f1bb-f61c-4599-98ac-5879bee4ddde,BootID:b267d78b-f69b-4338-80e8-3f4944338e5d,KernelVersion:4.15.0-118-generic,OSImage:Ubuntu 20.04 LTS,ContainerRuntimeVersion:containerd://1.3.3-14-g449e9269,KubeletVersion:v1.21.0-alpha.0,KubeProxyVersion:v1.21.0-alpha.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/ollivier/clearwater-cassandra@sha256:7460a5e29c91e16bd36534b5284b27832e1c6e0f7c22a4bb79eda79942d250e1 docker.io/ollivier/clearwater-cassandra:hunter],SizeBytes:386500834,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead-prov@sha256:ef3803b4336ea217f9a467dac7f5c4bc3d2cd0f3cbbcc6c101419506fc3a6fa4 docker.io/ollivier/clearwater-homestead-prov:hunter],SizeBytes:360721934,},ContainerImage{Names:[docker.io/ollivier/clearwater-ellis@sha256:abcaf4a9ec74aa746f3b3949f05ee359349fce2b0b3b2deedf67164719cff5dc docker.io/ollivier/clearwater-ellis:hunter],SizeBytes:351519591,},ContainerImage{Names:[docker.io/ollivier/clearwater-homer@sha256:f8738d756137990120f6aafca7e0bc57186ab591a4b51508ba9ddf30688f4de1 docker.io/ollivier/clearwater-homer:hunter],SizeBytes:344304298,},ContainerImage{Names:[docker.io/ollivier/clearwater-astaire@sha256:5bc730435563dc3e31f3a3b7bb58b5899c453929b624052622b3a9a1c2fd04d8 docker.io/ollivier/clearwater-astaire:hunter],SizeBytes:327310970,},ContainerImage{Names:[docker.io/ollivier/clearwater-bono@sha256:0da41ddbf72c158f8e617015450587634cdf6856696005601b2108df92a27254 docker.io/ollivier/clearwater-bono:hunter],SizeBytes:303708624,},ContainerImage{Names:[docker.io/ollivier/clearwater-sprout@sha256:fd91f90bdfbfac30288ca296962e36537f8fa311d34711904e027098227d9f49 docker.io/ollivier/clearwater-sprout:hunter],SizeBytes:298627136,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead@sha256:8467d6c9caefa6a402eba927ee31f8179b2dc3386ac4768dd04f4bd630a1b9e9 docker.io/ollivier/clearwater-homestead:hunter],SizeBytes:295167572,},ContainerImage{Names:[docker.io/ollivier/clearwater-ralf@sha256:7054a741dbd4ea45fa4efd8138c07e21d35fbed627971559de5df2d03883e94f docker.io/ollivier/clearwater-ralf:hunter],SizeBytes:287441316,},ContainerImage{Names:[docker.io/ollivier/clearwater-chronos@sha256:1d5f9b5385db632e3076463fec5a4db5d4a15ef21ac5a112cb42fb0551ffa36d 
docker.io/ollivier/clearwater-chronos:hunter],SizeBytes:285504787,},ContainerImage{Names:[k8s.gcr.io/etcd:3.4.13-0],SizeBytes:254659261,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.21.0-alpha.0],SizeBytes:136866158,},ContainerImage{Names:[docker.io/aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 docker.io/aquasec/kube-hunter:0.3.1],SizeBytes:124684106,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:112029652,},ContainerImage{Names:[docker.io/gluster/glusterdynamic-provisioner@sha256:90067cb05a7d217651e84576935934fd9dcff8e6bcdcbaa416bbf36fcd09dbd1 docker.io/gluster/glusterdynamic-provisioner:v1.0],SizeBytes:111200078,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner@sha256:8bc20b52ce066dd4ea3d9eaac40c04ea8a77f47c33789676580cf4c7c9ea3c3d k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner:v1.0],SizeBytes:111199402,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.21.0-alpha.0],SizeBytes:95511852,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.21.0-alpha.0],SizeBytes:88147262,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:85425365,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.21.0-alpha.0],SizeBytes:66088748,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:49230179,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:a3f7549ea04c419276e8b84e90a515bbce5bc8a057be2ed974ec45492eca346e k8s.gcr.io/e2e-test-images/agnhost:2.26],SizeBytes:49216572,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:4f3eec1e602e1e0b0e5858c8f3d399d0ef536799ec1f7666e025f211001cb706 k8s.gcr.io/e2e-test-images/agnhost:2.28],SizeBytes:49210832,},ContainerImage{Names:[k8s.gcr.io/coredns:1.7.0],SizeBytes:45355487,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:41902332,},ContainerImage{Names:[docker.io/library/httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a docker.io/library/httpd:2.4.39-alpine],SizeBytes:41901429,},ContainerImage{Names:[docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 docker.io/library/httpd:2.4.38-alpine],SizeBytes:40765017,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:40765006,},ContainerImage{Names:[docker.io/ollivier/clearwater-live-test@sha256:1ec1c46f531b5466e62e2a3f1e65e97336e8ea0ca50f11a4305775d6aeac9a58 docker.io/ollivier/clearwater-live-test:hunter],SizeBytes:39060692,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:ff02aacd9766d597883fabafc7ad604c719a57611db1bcc1564c69a45b000a55 
gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.17],SizeBytes:25311280,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:24757245,},ContainerImage{Names:[docker.io/pingcap/chaos-daemon@sha256:2138de599ca94104101fcd8f3bd04401f8c88f2bc721d0f77d55a81b85f4d92f docker.io/pingcap/chaos-daemon:v0.8.0],SizeBytes:21885344,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:20c828075d1e36f679d6a91e905b0927141eef5e15be0c9a1ca4a6a0ed9313d2 k8s.gcr.io/sig-storage/csi-provisioner:v2.1.0],SizeBytes:21205045,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:78e3393f5fd5ff6c1e5dada2478cfa456fb7164929e573cf9a87bf6532730679 k8s.gcr.io/sig-storage/csi-provisioner:v1.6.0],SizeBytes:19408504,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/regression-issue-74839@sha256:b4f1d8d61bdad84bd50442d161d5460e4019d53e989b64220fdbc62fc87d76bf k8s.gcr.io/e2e-test-images/regression-issue-74839:1.2],SizeBytes:18651485,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:2ffa647e8107cfd39e5f464e738dce014c9f5e51b108da36c3ab621048d0bbab k8s.gcr.io/sig-storage/csi-attacher:v2.2.0],SizeBytes:18451536,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:6c6a0332693a7c456378f6abd2bb40611826c1e1a733cadbdae2daab3125b71c k8s.gcr.io/sig-storage/csi-resizer:v0.5.0],SizeBytes:18412631,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:4051e85640c22f8e00c34dbd273576fc9e1e2829992656588062be9c0f69b04b k8s.gcr.io/e2e-test-images/nonroot:1.1],SizeBytes:17748448,},ContainerImage{Names:[docker.io/pingcap/chaos-mesh@sha256:b90c04665e0275602e52df37d4b6fdd4621df7d4e4c823fc63356559f67dea72 docker.io/pingcap/chaos-mesh:v0.8.0],SizeBytes:13863887,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e docker.io/coredns/coredns:1.8.0],SizeBytes:12945155,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:4a6e0769130686518325b21b0c1d0688b54e7c79244d48e1b15634e98e40c6ef docker.io/coredns/coredns:1.7.1],SizeBytes:12935701,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:642ff9910da6ea9a8624b0234eef52af9ca75ecbec474c5507cb096bdfbae4e5 docker.io/coredns/coredns:latest],SizeBytes:12893350,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 docker.io/aquasec/kube-bench:0.3.1],SizeBytes:8042926,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:9622c6a6dac7499a055a382930f4de82905a3c5735c0753f7094115c9c871309 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v1.3.0],SizeBytes:7717137,},ContainerImage{Names:[quay.io/coreos/etcd:v2.2.5],SizeBytes:7670543,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 22 00:37:08.113: INFO: Logging kubelet events for node latest-worker2 Mar 22 00:37:08.117: INFO: Logging pods the kubelet thinks are on node latest-worker2 Mar 22 00:37:08.122: INFO: chaos-daemon-4zjcg started at 2021-03-22 
00:02:52 +0000 UTC (0+1 container statuses recorded) Mar 22 00:37:08.122: INFO: Container chaos-daemon ready: true, restart count 0 Mar 22 00:37:08.122: INFO: kube-proxy-7q92q started at 2021-02-19 10:12:05 +0000 UTC (0+1 container statuses recorded) Mar 22 00:37:08.122: INFO: Container kube-proxy ready: true, restart count 0 Mar 22 00:37:08.122: INFO: kindnet-7qb7q started at 2021-03-22 00:02:52 +0000 UTC (0+1 container statuses recorded) Mar 22 00:37:08.122: INFO: Container kindnet-cni ready: true, restart count 0 Mar 22 00:37:08.122: INFO: privileged-pod started at 2021-03-22 00:36:27 +0000 UTC (0+2 container statuses recorded) Mar 22 00:37:08.122: INFO: Container not-privileged-container ready: true, restart count 0 Mar 22 00:37:08.122: INFO: Container privileged-container ready: true, restart count 0 W0322 00:37:08.127533 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Mar 22 00:37:08.371: INFO: Latency metrics for node latest-worker2 Mar 22 00:37:08.372: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "endpointslicemirroring-342" for this suite. • Failure [12.918 seconds] [sig-network] EndpointSliceMirroring /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should mirror a custom Endpoints resource through create update and delete [Conformance] [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Mar 22 00:37:07.642: Did not find matching EndpointSlice for endpointslicemirroring-342/example-custom-endpoints: timed out waiting for the condition /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:442 ------------------------------ {"msg":"FAILED [sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","total":330,"completed":135,"skipped":2300,"failed":8,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]"]} SSSSSSSSSSS ------------------------------ [sig-node] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 00:37:08.379: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account 
to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:53 [It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 00:38:08.775: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-783" for this suite. • [SLOW TEST:60.405 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":330,"completed":136,"skipped":2311,"failed":8,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]"]} SSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 00:38:08.784: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-watch STEP: Waiting for a default service account to be provisioned in namespace [It] watch on custom resource definition objects [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Mar 22 00:38:08.916: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating first CR Mar 22 00:38:09.502: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2021-03-22T00:38:09Z generation:1 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] 
manager:e2e.test operation:Update time:2021-03-22T00:38:09Z]] name:name1 resourceVersion:6999421 uid:1653c13c-4996-4bb3-8dc6-48a81bb58ce9] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Creating second CR Mar 22 00:38:19.513: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2021-03-22T00:38:19Z generation:1 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2021-03-22T00:38:19Z]] name:name2 resourceVersion:6999459 uid:95c9298b-1297-4d60-94fa-8d6152f3a6d5] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying first CR Mar 22 00:38:29.522: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2021-03-22T00:38:09Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2021-03-22T00:38:29Z]] name:name1 resourceVersion:6999481 uid:1653c13c-4996-4bb3-8dc6-48a81bb58ce9] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying second CR Mar 22 00:38:39.530: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2021-03-22T00:38:19Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2021-03-22T00:38:39Z]] name:name2 resourceVersion:6999503 uid:95c9298b-1297-4d60-94fa-8d6152f3a6d5] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting first CR Mar 22 00:38:49.542: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2021-03-22T00:38:09Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2021-03-22T00:38:29Z]] name:name1 resourceVersion:6999523 uid:1653c13c-4996-4bb3-8dc6-48a81bb58ce9] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting second CR Mar 22 00:38:59.600: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2021-03-22T00:38:19Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2021-03-22T00:38:39Z]] name:name2 resourceVersion:6999579 uid:95c9298b-1297-4d60-94fa-8d6152f3a6d5] num:map[num1:9223372036854775807 num2:1000000]]} [AfterEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 00:39:10.114: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-watch-5866" for this suite. 
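------------------------------
The watch test above exercises the full event sequence for custom resources: each create, update, and delete surfaces as exactly one ADDED, MODIFIED, or DELETED event, in order, with the resourceVersion increasing monotonically (6999421 through 6999579 in the log). A minimal sketch of the same pattern using client-go's dynamic client is shown below; the resource plural "noxus" is an assumption (it must match the CRD's spec.names.plural), and the file layout is illustrative.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := dynamic.NewForConfigOrDie(config)

	// Group/version/kind from the log: mygroup.example.com/v1beta1 WishIHadChosenNoxu.
	// The plural "noxus" is assumed; for a namespaced CRD, insert .Namespace(ns)
	// before Watch.
	gvr := schema.GroupVersionResource{
		Group:    "mygroup.example.com",
		Version:  "v1beta1",
		Resource: "noxus",
	}
	w, err := client.Resource(gvr).Watch(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	defer w.Stop()
	for event := range w.ResultChan() {
		// Each event corresponds to one "Got : ADDED/MODIFIED/DELETED ..." line above.
		fmt.Printf("Got : %s %v\n", event.Type, event.Object)
	}
}
------------------------------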
• [SLOW TEST:61.383 seconds] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 CustomResourceDefinition Watch /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_watch.go:42 watch on custom resource definition objects [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":330,"completed":137,"skipped":2321,"failed":8,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]"]} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 00:39:10.167: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward api env vars Mar 22 00:39:10.236: INFO: Waiting up to 5m0s for pod "downward-api-d80a5906-8d17-4873-a89d-a748b62a06d4" in namespace "downward-api-4397" to be "Succeeded or Failed" Mar 22 00:39:10.250: INFO: Pod "downward-api-d80a5906-8d17-4873-a89d-a748b62a06d4": Phase="Pending", Reason="", readiness=false. Elapsed: 14.488683ms Mar 22 00:39:12.336: INFO: Pod "downward-api-d80a5906-8d17-4873-a89d-a748b62a06d4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.100228261s Mar 22 00:39:14.340: INFO: Pod "downward-api-d80a5906-8d17-4873-a89d-a748b62a06d4": Phase="Running", Reason="", readiness=true. Elapsed: 4.104359039s Mar 22 00:39:16.344: INFO: Pod "downward-api-d80a5906-8d17-4873-a89d-a748b62a06d4": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.108404833s
STEP: Saw pod success
Mar 22 00:39:16.344: INFO: Pod "downward-api-d80a5906-8d17-4873-a89d-a748b62a06d4" satisfied condition "Succeeded or Failed"
Mar 22 00:39:16.347: INFO: Trying to get logs from node latest-worker pod downward-api-d80a5906-8d17-4873-a89d-a748b62a06d4 container dapi-container:
STEP: delete the pod
Mar 22 00:39:16.477: INFO: Waiting for pod downward-api-d80a5906-8d17-4873-a89d-a748b62a06d4 to disappear
Mar 22 00:39:16.491: INFO: Pod downward-api-d80a5906-8d17-4873-a89d-a748b62a06d4 no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 22 00:39:16.491: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4397" for this suite.

• [SLOW TEST:6.335 seconds]
[sig-node] Downward API
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":330,"completed":138,"skipped":2341,"failed":8,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]"]}
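The pod that just ran to completion wires its own resource limits and requests into the container environment through the downward API's resourceFieldRef. A sketch of an equivalent pod spec in Go follows; the resource quantities and env var names are illustrative, since the log does not record the exact values:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downward-api-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "dapi-container",
				Image:   "k8s.gcr.io/e2e-test-images/agnhost:2.28",
				Command: []string{"sh", "-c", "env"},
				Resources: corev1.ResourceRequirements{
					Requests: corev1.ResourceList{
						corev1.ResourceCPU:    resource.MustParse("250m"),
						corev1.ResourceMemory: resource.MustParse("32Mi"),
					},
					Limits: corev1.ResourceList{
						corev1.ResourceCPU:    resource.MustParse("1250m"),
						corev1.ResourceMemory: resource.MustParse("64Mi"),
					},
				},
				// Each env var is resolved by the kubelet from the
				// container's own resources via resourceFieldRef.
				Env: []corev1.EnvVar{
					{Name: "CPU_LIMIT", ValueFrom: &corev1.EnvVarSource{
						ResourceFieldRef: &corev1.ResourceFieldSelector{Resource: "limits.cpu"},
					}},
					{Name: "MEMORY_LIMIT", ValueFrom: &corev1.EnvVarSource{
						ResourceFieldRef: &corev1.ResourceFieldSelector{Resource: "limits.memory"},
					}},
					{Name: "CPU_REQUEST", ValueFrom: &corev1.EnvVarSource{
						ResourceFieldRef: &corev1.ResourceFieldSelector{Resource: "requests.cpu"},
					}},
					{Name: "MEMORY_REQUEST", ValueFrom: &corev1.EnvVarSource{
						ResourceFieldRef: &corev1.ResourceFieldSelector{Resource: "requests.memory"},
					}},
				},
			}},
		},
	}
	fmt.Println(pod.Name) // in practice, create via clientset.CoreV1().Pods(ns).Create
}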
SSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 22 00:39:16.503: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Mar 22 00:39:17.365: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Mar 22 00:39:19.572: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751970357, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751970357, loc:(*time.Location)(0x99208a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751970357, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751970357, loc:(*time.Location)(0x99208a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-8977db\" is progressing."}}, CollisionCount:(*int32)(nil)}
Mar 22 00:39:21.675: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751970357, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751970357, loc:(*time.Location)(0x99208a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751970357, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751970357, loc:(*time.Location)(0x99208a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-8977db\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Mar 22 00:39:25.076: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate configmap [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Registering the mutating configmap webhook via the AdmissionRegistration API
STEP: create a configmap that should be updated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 22 00:39:25.297: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-6455" for this suite.
STEP: Destroying namespace "webhook-6455-markers" for this suite.
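The mutation the test verifies happens server-side: the API server sends the webhook an AdmissionReview for the ConfigMap CREATE, and the webhook answers with a JSONPatch. A minimal sketch of such a handler follows; the URL path, patched key/value, and TLS file names are illustrative, not necessarily what the e2e sample webhook uses:

package main

import (
	"encoding/json"
	"fmt"
	"net/http"

	admissionv1 "k8s.io/api/admission/v1"
)

// mutateConfigMap answers an AdmissionReview with a JSONPatch that adds
// one key to the incoming ConfigMap's data.
func mutateConfigMap(w http.ResponseWriter, r *http.Request) {
	var review admissionv1.AdmissionReview
	if err := json.NewDecoder(r.Body).Decode(&review); err != nil || review.Request == nil {
		http.Error(w, "malformed AdmissionReview", http.StatusBadRequest)
		return
	}
	pt := admissionv1.PatchTypeJSONPatch
	// Reusing the decoded object keeps the apiVersion/kind the API server
	// sent; only Response needs to be filled in.
	review.Response = &admissionv1.AdmissionResponse{
		UID:       review.Request.UID,
		Allowed:   true,
		Patch:     []byte(`[{"op":"add","path":"/data/mutated","value":"true"}]`),
		PatchType: &pt,
	}
	json.NewEncoder(w).Encode(&review)
}

func main() {
	http.HandleFunc("/mutating-configmaps", mutateConfigMap)
	// The log's "Setting up server cert" step provisions the serving pair;
	// the file names here are placeholders.
	fmt.Println(http.ListenAndServeTLS(":8443", "tls.crt", "tls.key", nil))
}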
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 • [SLOW TEST:9.248 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate configmap [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":330,"completed":139,"skipped":2350,"failed":8,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]"]} SSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 00:39:25.751: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Performing setup for networking test in namespace pod-network-test-6829 STEP: creating a selector STEP: Creating the service pods in kubernetes Mar 22 00:39:25.899: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Mar 22 00:39:26.039: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 22 00:39:28.047: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 22 00:39:30.082: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 22 00:39:32.044: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 22 00:39:34.045: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 22 00:39:36.045: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 22 00:39:38.043: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 22 00:39:40.051: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 22 00:39:42.069: INFO: The status of Pod netserver-0 is Running 
(Ready = false) Mar 22 00:39:44.044: INFO: The status of Pod netserver-0 is Running (Ready = true) Mar 22 00:39:44.048: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Mar 22 00:39:48.220: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2 Mar 22 00:39:48.220: INFO: Going to poll 10.244.2.118 on port 8080 at least 0 times, with a maximum of 34 tries before failing Mar 22 00:39:48.222: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.118:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-6829 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 22 00:39:48.222: INFO: >>> kubeConfig: /root/.kube/config Mar 22 00:39:48.321: INFO: Found all 1 expected endpoints: [netserver-0] Mar 22 00:39:48.321: INFO: Going to poll 10.244.1.132 on port 8080 at least 0 times, with a maximum of 34 tries before failing Mar 22 00:39:48.323: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.132:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-6829 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 22 00:39:48.323: INFO: >>> kubeConfig: /root/.kube/config Mar 22 00:39:48.426: INFO: Found all 1 expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 00:39:48.426: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-6829" for this suite. 
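The connectivity check just logged execs curl from a host-network test pod against each netserver's /hostName endpoint and compares the reply to the expected pod name. A minimal Go sketch of the same probe follows, run directly rather than via ExecWithOptions; it assumes the caller can reach the pod network (for example, it runs on a cluster node):

package main

import (
	"fmt"
	"io"
	"net"
	"net/http"
	"os"
	"strings"
	"time"
)

// probeHostName fetches /hostName from a netserver pod, the same endpoint
// the e2e curl polls, and checks that the reply names the expected pod.
func probeHostName(podIP, port, want string) error {
	client := &http.Client{Timeout: 15 * time.Second}
	resp, err := client.Get("http://" + net.JoinHostPort(podIP, port) + "/hostName")
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		return err
	}
	if got := strings.TrimSpace(string(body)); got != want {
		return fmt.Errorf("expected %q, got %q", want, got)
	}
	return nil
}

func main() {
	// Pod IPs taken from the log; in a live cluster they come from pod status.
	for ip, pod := range map[string]string{
		"10.244.2.118": "netserver-0",
		"10.244.1.132": "netserver-1",
	} {
		if err := probeHostName(ip, "8080", pod); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}
}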
• [SLOW TEST:22.683 seconds] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/framework.go:23 Granular Checks: Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/networking.go:30 should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":330,"completed":140,"skipped":2359,"failed":8,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]"]} SSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 00:39:48.435: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should have session affinity work for NodePort service [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating service in namespace services-859 STEP: creating service affinity-nodeport in namespace services-859 STEP: creating replication controller affinity-nodeport in namespace services-859 I0322 00:39:48.788250 7 runners.go:190] Created replication controller with name: affinity-nodeport, namespace: services-859, replica count: 3 I0322 00:39:51.839390 7 runners.go:190] affinity-nodeport Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0322 00:39:54.840164 7 runners.go:190] affinity-nodeport Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0322 00:39:57.840452 7 runners.go:190] affinity-nodeport Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Mar 22 00:39:57.851: INFO: Creating new exec pod E0322 00:40:01.914314 7 reflector.go:138] 
k8s.io/kubernetes/test/e2e/framework/service/jig.go:437: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: the server could not find the requested resource
E0322 00:40:03.052134 7 reflector.go:138] k8s.io/kubernetes/test/e2e/framework/service/jig.go:437: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: the server could not find the requested resource
E0322 00:40:04.915851 7 reflector.go:138] k8s.io/kubernetes/test/e2e/framework/service/jig.go:437: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: the server could not find the requested resource
E0322 00:40:10.569358 7 reflector.go:138] k8s.io/kubernetes/test/e2e/framework/service/jig.go:437: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: the server could not find the requested resource
E0322 00:40:23.273382 7 reflector.go:138] k8s.io/kubernetes/test/e2e/framework/service/jig.go:437: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: the server could not find the requested resource
E0322 00:40:43.843632 7 reflector.go:138] k8s.io/kubernetes/test/e2e/framework/service/jig.go:437: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: the server could not find the requested resource
E0322 00:41:25.463702 7 reflector.go:138] k8s.io/kubernetes/test/e2e/framework/service/jig.go:437: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: the server could not find the requested resource
Mar 22 00:42:01.913: FAIL: Unexpected error:
    <*errors.errorString | 0xc0033b4540>: {
        s: "no subset of available IP address found for the endpoint affinity-nodeport within timeout 2m0s",
    }
    no subset of available IP address found for the endpoint affinity-nodeport within timeout 2m0s
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/network.execAffinityTestForNonLBServiceWithOptionalTransition(0xc000f59760, 0x73e8b88, 0xc00379a2c0, 0xc0002a7b80, 0x0)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2563 +0x625
k8s.io/kubernetes/test/e2e/network.execAffinityTestForNonLBService(...)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2522
k8s.io/kubernetes/test/e2e/network.glob..func24.25()
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1829 +0xa5
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc002c6a180)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c
k8s.io/kubernetes/test/e2e.TestE2E(0xc002c6a180)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b
testing.tRunner(0xc002c6a180, 0x6d60740)
	/usr/local/go/src/testing/testing.go:1194 +0xef
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1239 +0x2b3
Mar 22 00:42:01.914: INFO: Cleaning up the exec pod
STEP: deleting ReplicationController affinity-nodeport in namespace services-859, will wait for the garbage collector to delete the pods
Mar 22 00:42:02.011: INFO: Deleting ReplicationController affinity-nodeport took: 8.175912ms
Mar 22 00:42:02.812: INFO: Terminating ReplicationController affinity-nodeport pods took: 800.362116ms
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
STEP: Collecting events from namespace "services-859".
STEP: Found 23 events.
Mar 22 00:42:35.264: INFO: At 2021-03-22 00:39:48 +0000 UTC - event for affinity-nodeport: {replication-controller } SuccessfulCreate: Created pod: affinity-nodeport-68sxk Mar 22 00:42:35.264: INFO: At 2021-03-22 00:39:49 +0000 UTC - event for affinity-nodeport: {replication-controller } SuccessfulCreate: Created pod: affinity-nodeport-gmgd8 Mar 22 00:42:35.264: INFO: At 2021-03-22 00:39:49 +0000 UTC - event for affinity-nodeport: {replication-controller } SuccessfulCreate: Created pod: affinity-nodeport-hrcw9 Mar 22 00:42:35.264: INFO: At 2021-03-22 00:39:49 +0000 UTC - event for affinity-nodeport-68sxk: {default-scheduler } Scheduled: Successfully assigned services-859/affinity-nodeport-68sxk to latest-worker2 Mar 22 00:42:35.264: INFO: At 2021-03-22 00:39:49 +0000 UTC - event for affinity-nodeport-gmgd8: {default-scheduler } Scheduled: Successfully assigned services-859/affinity-nodeport-gmgd8 to latest-worker2 Mar 22 00:42:35.264: INFO: At 2021-03-22 00:39:49 +0000 UTC - event for affinity-nodeport-hrcw9: {default-scheduler } Scheduled: Successfully assigned services-859/affinity-nodeport-hrcw9 to latest-worker2 Mar 22 00:42:35.264: INFO: At 2021-03-22 00:39:51 +0000 UTC - event for affinity-nodeport-68sxk: {kubelet latest-worker2} Pulled: Container image "k8s.gcr.io/e2e-test-images/agnhost:2.28" already present on machine Mar 22 00:42:35.264: INFO: At 2021-03-22 00:39:51 +0000 UTC - event for affinity-nodeport-gmgd8: {kubelet latest-worker2} Pulled: Container image "k8s.gcr.io/e2e-test-images/agnhost:2.28" already present on machine Mar 22 00:42:35.264: INFO: At 2021-03-22 00:39:51 +0000 UTC - event for affinity-nodeport-hrcw9: {kubelet latest-worker2} Pulled: Container image "k8s.gcr.io/e2e-test-images/agnhost:2.28" already present on machine Mar 22 00:42:35.264: INFO: At 2021-03-22 00:39:53 +0000 UTC - event for affinity-nodeport-68sxk: {kubelet latest-worker2} Created: Created container affinity-nodeport Mar 22 00:42:35.264: INFO: At 2021-03-22 00:39:53 +0000 UTC - event for affinity-nodeport-gmgd8: {kubelet latest-worker2} Created: Created container affinity-nodeport Mar 22 00:42:35.264: INFO: At 2021-03-22 00:39:53 +0000 UTC - event for affinity-nodeport-hrcw9: {kubelet latest-worker2} Created: Created container affinity-nodeport Mar 22 00:42:35.264: INFO: At 2021-03-22 00:39:54 +0000 UTC - event for affinity-nodeport-68sxk: {kubelet latest-worker2} Started: Started container affinity-nodeport Mar 22 00:42:35.264: INFO: At 2021-03-22 00:39:54 +0000 UTC - event for affinity-nodeport-gmgd8: {kubelet latest-worker2} Started: Started container affinity-nodeport Mar 22 00:42:35.264: INFO: At 2021-03-22 00:39:54 +0000 UTC - event for affinity-nodeport-hrcw9: {kubelet latest-worker2} Started: Started container affinity-nodeport Mar 22 00:42:35.264: INFO: At 2021-03-22 00:39:57 +0000 UTC - event for execpod-affinityjftqj: {default-scheduler } Scheduled: Successfully assigned services-859/execpod-affinityjftqj to latest-worker2 Mar 22 00:42:35.264: INFO: At 2021-03-22 00:39:59 +0000 UTC - event for execpod-affinityjftqj: {kubelet latest-worker2} Pulled: Container image "k8s.gcr.io/e2e-test-images/agnhost:2.28" already present on machine Mar 22 00:42:35.264: INFO: At 2021-03-22 00:40:00 +0000 UTC - event for execpod-affinityjftqj: {kubelet latest-worker2} Started: Started container agnhost-container Mar 22 00:42:35.264: INFO: At 2021-03-22 00:40:00 +0000 UTC - event for execpod-affinityjftqj: {kubelet latest-worker2} Created: Created container agnhost-container Mar 22 00:42:35.264: INFO: 
At 2021-03-22 00:42:01 +0000 UTC - event for execpod-affinityjftqj: {kubelet latest-worker2} Killing: Stopping container agnhost-container Mar 22 00:42:35.264: INFO: At 2021-03-22 00:42:02 +0000 UTC - event for affinity-nodeport-68sxk: {kubelet latest-worker2} Killing: Stopping container affinity-nodeport Mar 22 00:42:35.264: INFO: At 2021-03-22 00:42:02 +0000 UTC - event for affinity-nodeport-gmgd8: {kubelet latest-worker2} Killing: Stopping container affinity-nodeport Mar 22 00:42:35.264: INFO: At 2021-03-22 00:42:02 +0000 UTC - event for affinity-nodeport-hrcw9: {kubelet latest-worker2} Killing: Stopping container affinity-nodeport Mar 22 00:42:35.267: INFO: POD NODE PHASE GRACE CONDITIONS Mar 22 00:42:35.267: INFO: Mar 22 00:42:35.271: INFO: Logging node info for node latest-control-plane Mar 22 00:42:35.275: INFO: Node Info: &Node{ObjectMeta:{latest-control-plane 490b9532-4cb6-4803-8805-500c50bef538 6999964 0 2021-02-19 10:11:38 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-control-plane kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master:] map[kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2021-02-19 10:11:38 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {kubeadm Update v1 2021-02-19 10:11:39 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{}}}}} {kube-controller-manager Update v1 2021-02-19 10:12:16 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:kind://docker/latest/latest-control-plane,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-03-22 00:39:39 +0000 UTC,LastTransitionTime:2021-02-19 10:11:38 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-03-22 00:39:39 +0000 UTC,LastTransitionTime:2021-02-19 10:11:38 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-03-22 00:39:39 +0000 UTC,LastTransitionTime:2021-02-19 10:11:38 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-03-22 00:39:39 +0000 UTC,LastTransitionTime:2021-02-19 10:12:15 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.14,},NodeAddress{Type:Hostname,Address:latest-control-plane,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:2277e49732264d9b915753a27b5b08cc,SystemUUID:3fcd47a6-9190-448f-a26a-9823c0424f23,BootID:b267d78b-f69b-4338-80e8-3f4944338e5d,KernelVersion:4.15.0-118-generic,OSImage:Ubuntu 20.04 LTS,ContainerRuntimeVersion:containerd://1.3.3-14-g449e9269,KubeletVersion:v1.21.0-alpha.0,KubeProxyVersion:v1.21.0-alpha.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/etcd:3.4.13-0],SizeBytes:254659261,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.21.0-alpha.0],SizeBytes:136866158,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.21.0-alpha.0],SizeBytes:95511852,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.21.0-alpha.0],SizeBytes:88147262,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.21.0-alpha.0],SizeBytes:66088748,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:a3f7549ea04c419276e8b84e90a515bbce5bc8a057be2ed974ec45492eca346e k8s.gcr.io/e2e-test-images/agnhost:2.26],SizeBytes:49216572,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:4f3eec1e602e1e0b0e5858c8f3d399d0ef536799ec1f7666e025f211001cb706 
k8s.gcr.io/e2e-test-images/agnhost:2.28],SizeBytes:49210832,},ContainerImage{Names:[k8s.gcr.io/coredns:1.7.0],SizeBytes:45355487,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 docker.io/aquasec/kube-bench:0.3.1],SizeBytes:8042926,},ContainerImage{Names:[k8s.gcr.io/pause:3.2],SizeBytes:685724,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:301268,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 22 00:42:35.275: INFO: Logging kubelet events for node latest-control-plane Mar 22 00:42:35.280: INFO: Logging pods the kubelet thinks is on node latest-control-plane Mar 22 00:42:35.302: INFO: etcd-latest-control-plane started at 2021-02-19 10:11:46 +0000 UTC (0+1 container statuses recorded) Mar 22 00:42:35.302: INFO: Container etcd ready: true, restart count 0 Mar 22 00:42:35.302: INFO: kube-proxy-6jdsd started at 2021-02-19 10:11:54 +0000 UTC (0+1 container statuses recorded) Mar 22 00:42:35.302: INFO: Container kube-proxy ready: true, restart count 0 Mar 22 00:42:35.302: INFO: coredns-74ff55c5b-tqd5x started at 2021-03-22 00:01:48 +0000 UTC (0+1 container statuses recorded) Mar 22 00:42:35.302: INFO: Container coredns ready: true, restart count 0 Mar 22 00:42:35.302: INFO: kube-controller-manager-latest-control-plane started at 2021-02-19 10:11:46 +0000 UTC (0+1 container statuses recorded) Mar 22 00:42:35.302: INFO: Container kube-controller-manager ready: true, restart count 0 Mar 22 00:42:35.302: INFO: kube-scheduler-latest-control-plane started at 2021-02-19 10:11:46 +0000 UTC (0+1 container statuses recorded) Mar 22 00:42:35.303: INFO: Container kube-scheduler ready: true, restart count 0 Mar 22 00:42:35.303: INFO: kube-apiserver-latest-control-plane started at 2021-02-19 10:11:46 +0000 UTC (0+1 container statuses recorded) Mar 22 00:42:35.303: INFO: Container kube-apiserver ready: true, restart count 0 Mar 22 00:42:35.303: INFO: kindnet-94zqp started at 2021-02-19 10:11:54 +0000 UTC (0+1 container statuses recorded) Mar 22 00:42:35.303: INFO: Container kindnet-cni ready: true, restart count 0 Mar 22 00:42:35.303: INFO: coredns-74ff55c5b-9rxsk started at 2021-03-22 00:01:47 +0000 UTC (0+1 container statuses recorded) Mar 22 00:42:35.303: INFO: Container coredns ready: true, restart count 0 Mar 22 00:42:35.303: INFO: local-path-provisioner-8b46957d4-54gls started at 2021-02-19 10:12:21 +0000 UTC (0+1 container statuses recorded) Mar 22 00:42:35.303: INFO: Container local-path-provisioner ready: true, restart count 0 W0322 00:42:35.308583 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. 
Mar 22 00:42:35.395: INFO: Latency metrics for node latest-control-plane Mar 22 00:42:35.395: INFO: Logging node info for node latest-worker Mar 22 00:42:35.399: INFO: Node Info: &Node{ObjectMeta:{latest-worker 52cd6d4b-d53f-435d-801a-04c2822dec44 7000218 0 2021-02-19 10:12:05 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux io.kubernetes.storage.mock/node:some-mock-node kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-worker kubernetes.io/os:linux] map[csi.volume.kubernetes.io/nodeid:{"csi-mock-csi-mock-volumes-11":"csi-mock-csi-mock-volumes-11","csi-mock-csi-mock-volumes-1170":"csi-mock-csi-mock-volumes-1170","csi-mock-csi-mock-volumes-1204":"csi-mock-csi-mock-volumes-1204","csi-mock-csi-mock-volumes-1243":"csi-mock-csi-mock-volumes-1243","csi-mock-csi-mock-volumes-135":"csi-mock-csi-mock-volumes-135","csi-mock-csi-mock-volumes-1517":"csi-mock-csi-mock-volumes-1517","csi-mock-csi-mock-volumes-1561":"csi-mock-csi-mock-volumes-1561","csi-mock-csi-mock-volumes-1595":"csi-mock-csi-mock-volumes-1595","csi-mock-csi-mock-volumes-1714":"csi-mock-csi-mock-volumes-1714","csi-mock-csi-mock-volumes-1732":"csi-mock-csi-mock-volumes-1732","csi-mock-csi-mock-volumes-1876":"csi-mock-csi-mock-volumes-1876","csi-mock-csi-mock-volumes-1945":"csi-mock-csi-mock-volumes-1945","csi-mock-csi-mock-volumes-2041":"csi-mock-csi-mock-volumes-2041","csi-mock-csi-mock-volumes-2057":"csi-mock-csi-mock-volumes-2057","csi-mock-csi-mock-volumes-2208":"csi-mock-csi-mock-volumes-2208","csi-mock-csi-mock-volumes-2302":"csi-mock-csi-mock-volumes-2302","csi-mock-csi-mock-volumes-2459":"csi-mock-csi-mock-volumes-2459","csi-mock-csi-mock-volumes-2511":"csi-mock-csi-mock-volumes-2511","csi-mock-csi-mock-volumes-2706":"csi-mock-csi-mock-volumes-2706","csi-mock-csi-mock-volumes-2710":"csi-mock-csi-mock-volumes-2710","csi-mock-csi-mock-volumes-273":"csi-mock-csi-mock-volumes-273","csi-mock-csi-mock-volumes-2782":"csi-mock-csi-mock-volumes-2782","csi-mock-csi-mock-volumes-2797":"csi-mock-csi-mock-volumes-2797","csi-mock-csi-mock-volumes-2805":"csi-mock-csi-mock-volumes-2805","csi-mock-csi-mock-volumes-2843":"csi-mock-csi-mock-volumes-2843","csi-mock-csi-mock-volumes-3075":"csi-mock-csi-mock-volumes-3075","csi-mock-csi-mock-volumes-3107":"csi-mock-csi-mock-volumes-3107","csi-mock-csi-mock-volumes-3115":"csi-mock-csi-mock-volumes-3115","csi-mock-csi-mock-volumes-3164":"csi-mock-csi-mock-volumes-3164","csi-mock-csi-mock-volumes-3202":"csi-mock-csi-mock-volumes-3202","csi-mock-csi-mock-volumes-3235":"csi-mock-csi-mock-volumes-3235","csi-mock-csi-mock-volumes-3298":"csi-mock-csi-mock-volumes-3298","csi-mock-csi-mock-volumes-3313":"csi-mock-csi-mock-volumes-3313","csi-mock-csi-mock-volumes-3364":"csi-mock-csi-mock-volumes-3364","csi-mock-csi-mock-volumes-3488":"csi-mock-csi-mock-volumes-3488","csi-mock-csi-mock-volumes-3490":"csi-mock-csi-mock-volumes-3490","csi-mock-csi-mock-volumes-3546":"csi-mock-csi-mock-volumes-3546","csi-mock-csi-mock-volumes-3568":"csi-mock-csi-mock-volumes-3568","csi-mock-csi-mock-volumes-3595":"csi-mock-csi-mock-volumes-3595","csi-mock-csi-mock-volumes-3615":"csi-mock-csi-mock-volumes-3615","csi-mock-csi-mock-volumes-3622":"csi-mock-csi-mock-volumes-3622","csi-mock-csi-mock-volumes-3660":"csi-mock-csi-mock-volumes-3660","csi-mock-csi-mock-volumes-3738":"csi-mock-csi-mock-volumes-3738","csi-mock-csi-mock-volumes-380":"csi-mock-csi-mock-volumes-380","csi-mock-csi-mock-volumes-3905":"csi-mock-csi-mock-volumes-3905","csi-mock-csi-mock-volumes-3983":"csi-mock-csi-mock-volumes-3983","csi-mock
-csi-mock-volumes-4658":"csi-mock-csi-mock-volumes-4658","csi-mock-csi-mock-volumes-4689":"csi-mock-csi-mock-volumes-4689","csi-mock-csi-mock-volumes-4839":"csi-mock-csi-mock-volumes-4839","csi-mock-csi-mock-volumes-4871":"csi-mock-csi-mock-volumes-4871","csi-mock-csi-mock-volumes-4885":"csi-mock-csi-mock-volumes-4885","csi-mock-csi-mock-volumes-4888":"csi-mock-csi-mock-volumes-4888","csi-mock-csi-mock-volumes-5028":"csi-mock-csi-mock-volumes-5028","csi-mock-csi-mock-volumes-5118":"csi-mock-csi-mock-volumes-5118","csi-mock-csi-mock-volumes-5120":"csi-mock-csi-mock-volumes-5120","csi-mock-csi-mock-volumes-5160":"csi-mock-csi-mock-volumes-5160","csi-mock-csi-mock-volumes-5164":"csi-mock-csi-mock-volumes-5164","csi-mock-csi-mock-volumes-5225":"csi-mock-csi-mock-volumes-5225","csi-mock-csi-mock-volumes-526":"csi-mock-csi-mock-volumes-526","csi-mock-csi-mock-volumes-5365":"csi-mock-csi-mock-volumes-5365","csi-mock-csi-mock-volumes-5399":"csi-mock-csi-mock-volumes-5399","csi-mock-csi-mock-volumes-5443":"csi-mock-csi-mock-volumes-5443","csi-mock-csi-mock-volumes-5520":"csi-mock-csi-mock-volumes-5520","csi-mock-csi-mock-volumes-5561":"csi-mock-csi-mock-volumes-5561","csi-mock-csi-mock-volumes-5608":"csi-mock-csi-mock-volumes-5608","csi-mock-csi-mock-volumes-5652":"csi-mock-csi-mock-volumes-5652","csi-mock-csi-mock-volumes-5672":"csi-mock-csi-mock-volumes-5672","csi-mock-csi-mock-volumes-569":"csi-mock-csi-mock-volumes-569","csi-mock-csi-mock-volumes-5759":"csi-mock-csi-mock-volumes-5759","csi-mock-csi-mock-volumes-5910":"csi-mock-csi-mock-volumes-5910","csi-mock-csi-mock-volumes-6046":"csi-mock-csi-mock-volumes-6046","csi-mock-csi-mock-volumes-6099":"csi-mock-csi-mock-volumes-6099","csi-mock-csi-mock-volumes-621":"csi-mock-csi-mock-volumes-621","csi-mock-csi-mock-volumes-6347":"csi-mock-csi-mock-volumes-6347","csi-mock-csi-mock-volumes-6447":"csi-mock-csi-mock-volumes-6447","csi-mock-csi-mock-volumes-6752":"csi-mock-csi-mock-volumes-6752","csi-mock-csi-mock-volumes-6763":"csi-mock-csi-mock-volumes-6763","csi-mock-csi-mock-volumes-7184":"csi-mock-csi-mock-volumes-7184","csi-mock-csi-mock-volumes-7244":"csi-mock-csi-mock-volumes-7244","csi-mock-csi-mock-volumes-7259":"csi-mock-csi-mock-volumes-7259","csi-mock-csi-mock-volumes-726":"csi-mock-csi-mock-volumes-726","csi-mock-csi-mock-volumes-7302":"csi-mock-csi-mock-volumes-7302","csi-mock-csi-mock-volumes-7346":"csi-mock-csi-mock-volumes-7346","csi-mock-csi-mock-volumes-7378":"csi-mock-csi-mock-volumes-7378","csi-mock-csi-mock-volumes-7385":"csi-mock-csi-mock-volumes-7385","csi-mock-csi-mock-volumes-746":"csi-mock-csi-mock-volumes-746","csi-mock-csi-mock-volumes-7574":"csi-mock-csi-mock-volumes-7574","csi-mock-csi-mock-volumes-7712":"csi-mock-csi-mock-volumes-7712","csi-mock-csi-mock-volumes-7749":"csi-mock-csi-mock-volumes-7749","csi-mock-csi-mock-volumes-7820":"csi-mock-csi-mock-volumes-7820","csi-mock-csi-mock-volumes-7946":"csi-mock-csi-mock-volumes-7946","csi-mock-csi-mock-volumes-8159":"csi-mock-csi-mock-volumes-8159","csi-mock-csi-mock-volumes-8382":"csi-mock-csi-mock-volumes-8382","csi-mock-csi-mock-volumes-8458":"csi-mock-csi-mock-volumes-8458","csi-mock-csi-mock-volumes-8569":"csi-mock-csi-mock-volumes-8569","csi-mock-csi-mock-volumes-8626":"csi-mock-csi-mock-volumes-8626","csi-mock-csi-mock-volumes-8627":"csi-mock-csi-mock-volumes-8627","csi-mock-csi-mock-volumes-8774":"csi-mock-csi-mock-volumes-8774","csi-mock-csi-mock-volumes-8777":"csi-mock-csi-mock-volumes-8777","csi-mock-csi-mock-volumes-8880":"csi-mock-csi-mock-volumes-8880","csi-mock-
csi-mock-volumes-923":"csi-mock-csi-mock-volumes-923","csi-mock-csi-mock-volumes-9279":"csi-mock-csi-mock-volumes-9279","csi-mock-csi-mock-volumes-9372":"csi-mock-csi-mock-volumes-9372","csi-mock-csi-mock-volumes-9687":"csi-mock-csi-mock-volumes-9687","csi-mock-csi-mock-volumes-969":"csi-mock-csi-mock-volumes-969","csi-mock-csi-mock-volumes-983":"csi-mock-csi-mock-volumes-983","csi-mock-csi-mock-volumes-9858":"csi-mock-csi-mock-volumes-9858","csi-mock-csi-mock-volumes-9983":"csi-mock-csi-mock-volumes-9983","csi-mock-csi-mock-volumes-9992":"csi-mock-csi-mock-volumes-9992","csi-mock-csi-mock-volumes-9995":"csi-mock-csi-mock-volumes-9995"} kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-02-19 10:12:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {kube-controller-manager Update v1 2021-03-22 00:39:31 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}}}}} {kubelet Update v1 2021-03-22 00:39:59 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:csi.volume.kubernetes.io/nodeid":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:io.kubernetes.storage.mock/node":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:kind://docker/latest/latest-worker,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 
DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-03-22 00:39:59 +0000 UTC,LastTransitionTime:2021-02-19 10:12:05 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-03-22 00:39:59 +0000 UTC,LastTransitionTime:2021-02-19 10:12:05 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-03-22 00:39:59 +0000 UTC,LastTransitionTime:2021-02-19 10:12:05 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-03-22 00:39:59 +0000 UTC,LastTransitionTime:2021-02-19 10:12:25 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.9,},NodeAddress{Type:Hostname,Address:latest-worker,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:9501a3ffa7bf40adaca8f4cb3d1dcc93,SystemUUID:6921bb21-67c9-42ea-b514-81b1daac5968,BootID:b267d78b-f69b-4338-80e8-3f4944338e5d,KernelVersion:4.15.0-118-generic,OSImage:Ubuntu 20.04 LTS,ContainerRuntimeVersion:containerd://1.3.3-14-g449e9269,KubeletVersion:v1.21.0-alpha.0,KubeProxyVersion:v1.21.0-alpha.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/ollivier/clearwater-cassandra@sha256:7460a5e29c91e16bd36534b5284b27832e1c6e0f7c22a4bb79eda79942d250e1 docker.io/ollivier/clearwater-cassandra:hunter],SizeBytes:386500834,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead-prov@sha256:ef3803b4336ea217f9a467dac7f5c4bc3d2cd0f3cbbcc6c101419506fc3a6fa4 docker.io/ollivier/clearwater-homestead-prov:hunter],SizeBytes:360721934,},ContainerImage{Names:[docker.io/ollivier/clearwater-ellis@sha256:abcaf4a9ec74aa746f3b3949f05ee359349fce2b0b3b2deedf67164719cff5dc docker.io/ollivier/clearwater-ellis:hunter],SizeBytes:351519591,},ContainerImage{Names:[docker.io/ollivier/clearwater-homer@sha256:f8738d756137990120f6aafca7e0bc57186ab591a4b51508ba9ddf30688f4de1 docker.io/ollivier/clearwater-homer:hunter],SizeBytes:344304298,},ContainerImage{Names:[docker.io/ollivier/clearwater-astaire@sha256:5bc730435563dc3e31f3a3b7bb58b5899c453929b624052622b3a9a1c2fd04d8 docker.io/ollivier/clearwater-astaire:hunter],SizeBytes:327310970,},ContainerImage{Names:[docker.io/ollivier/clearwater-bono@sha256:0da41ddbf72c158f8e617015450587634cdf6856696005601b2108df92a27254 docker.io/ollivier/clearwater-bono:hunter],SizeBytes:303708624,},ContainerImage{Names:[docker.io/ollivier/clearwater-sprout@sha256:fd91f90bdfbfac30288ca296962e36537f8fa311d34711904e027098227d9f49 docker.io/ollivier/clearwater-sprout:hunter],SizeBytes:298627136,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead@sha256:8467d6c9caefa6a402eba927ee31f8179b2dc3386ac4768dd04f4bd630a1b9e9 docker.io/ollivier/clearwater-homestead:hunter],SizeBytes:295167572,},ContainerImage{Names:[docker.io/ollivier/clearwater-ralf@sha256:7054a741dbd4ea45fa4efd8138c07e21d35fbed627971559de5df2d03883e94f 
docker.io/ollivier/clearwater-ralf:hunter],SizeBytes:287441316,},ContainerImage{Names:[docker.io/ollivier/clearwater-chronos@sha256:1d5f9b5385db632e3076463fec5a4db5d4a15ef21ac5a112cb42fb0551ffa36d docker.io/ollivier/clearwater-chronos:hunter],SizeBytes:285504787,},ContainerImage{Names:[k8s.gcr.io/etcd:3.4.13-0],SizeBytes:254659261,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.21.0-alpha.0],SizeBytes:136866158,},ContainerImage{Names:[docker.io/aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 docker.io/aquasec/kube-hunter:0.3.1],SizeBytes:124684106,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:112029652,},ContainerImage{Names:[docker.io/gluster/glusterdynamic-provisioner@sha256:90067cb05a7d217651e84576935934fd9dcff8e6bcdcbaa416bbf36fcd09dbd1 docker.io/gluster/glusterdynamic-provisioner:v1.0],SizeBytes:111200078,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner@sha256:8bc20b52ce066dd4ea3d9eaac40c04ea8a77f47c33789676580cf4c7c9ea3c3d k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner:v1.0],SizeBytes:111199402,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.21.0-alpha.0],SizeBytes:95511852,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.21.0-alpha.0],SizeBytes:88147262,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:85425365,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.21.0-alpha.0],SizeBytes:66088748,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:49230179,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:a3f7549ea04c419276e8b84e90a515bbce5bc8a057be2ed974ec45492eca346e k8s.gcr.io/e2e-test-images/agnhost:2.26],SizeBytes:49216572,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:4f3eec1e602e1e0b0e5858c8f3d399d0ef536799ec1f7666e025f211001cb706 k8s.gcr.io/e2e-test-images/agnhost:2.28],SizeBytes:49210832,},ContainerImage{Names:[docker.io/bitnami/kubectl@sha256:471a2f06834db208d0e4bef5856550f911357c41b72d0baefa591b3714839067 docker.io/bitnami/kubectl:latest],SizeBytes:48898314,},ContainerImage{Names:[k8s.gcr.io/coredns:1.7.0],SizeBytes:45355487,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:41902332,},ContainerImage{Names:[docker.io/library/httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a docker.io/library/httpd:2.4.39-alpine],SizeBytes:41901429,},ContainerImage{Names:[docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 docker.io/library/httpd:2.4.38-alpine],SizeBytes:40765017,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 
k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:40765006,},ContainerImage{Names:[docker.io/ollivier/clearwater-live-test@sha256:1ec1c46f531b5466e62e2a3f1e65e97336e8ea0ca50f11a4305775d6aeac9a58 docker.io/ollivier/clearwater-live-test:hunter],SizeBytes:39060692,},ContainerImage{Names:[k8s.gcr.io/build-image/debian-iptables@sha256:abe8cef9e116f2d5ec1175c386e33841ff3386779138b425af876384b0fd7ccb k8s.gcr.io/build-image/debian-iptables:buster-v1.5.0],SizeBytes:38088315,},ContainerImage{Names:[k8s.gcr.io/build-image/debian-iptables@sha256:4c9410a4ee555dcb0e8b7bd6fc77c65ac400f7c5bd4555df8187630efaea6ea4 k8s.gcr.io/build-image/debian-iptables:buster-v1.3.0],SizeBytes:37934917,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:ff02aacd9766d597883fabafc7ad604c719a57611db1bcc1564c69a45b000a55 gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.17],SizeBytes:25311280,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:24757245,},ContainerImage{Names:[docker.io/pingcap/chaos-daemon@sha256:2138de599ca94104101fcd8f3bd04401f8c88f2bc721d0f77d55a81b85f4d92f docker.io/pingcap/chaos-daemon:v0.8.0],SizeBytes:21885344,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:20c828075d1e36f679d6a91e905b0927141eef5e15be0c9a1ca4a6a0ed9313d2 k8s.gcr.io/sig-storage/csi-provisioner:v2.1.0],SizeBytes:21205045,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:78e3393f5fd5ff6c1e5dada2478cfa456fb7164929e573cf9a87bf6532730679 k8s.gcr.io/sig-storage/csi-provisioner:v1.6.0],SizeBytes:19408504,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:2ffa647e8107cfd39e5f464e738dce014c9f5e51b108da36c3ab621048d0bbab k8s.gcr.io/sig-storage/csi-attacher:v2.2.0],SizeBytes:18451536,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:6c6a0332693a7c456378f6abd2bb40611826c1e1a733cadbdae2daab3125b71c k8s.gcr.io/sig-storage/csi-resizer:v0.5.0],SizeBytes:18412631,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:4051e85640c22f8e00c34dbd273576fc9e1e2829992656588062be9c0f69b04b k8s.gcr.io/e2e-test-images/nonroot:1.1],SizeBytes:17748448,},ContainerImage{Names:[docker.io/pingcap/chaos-mesh@sha256:b90c04665e0275602e52df37d4b6fdd4621df7d4e4c823fc63356559f67dea72 docker.io/pingcap/chaos-mesh:v0.8.0],SizeBytes:13863887,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e docker.io/coredns/coredns:latest docker.io/coredns/coredns:1.8.0],SizeBytes:12945155,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:4a6e0769130686518325b21b0c1d0688b54e7c79244d48e1b15634e98e40c6ef docker.io/coredns/coredns:1.7.1],SizeBytes:12935701,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 
docker.io/aquasec/kube-bench:0.3.1],SizeBytes:8042926,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:9622c6a6dac7499a055a382930f4de82905a3c5735c0753f7094115c9c871309 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v1.3.0],SizeBytes:7717137,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 22 00:42:35.400: INFO: Logging kubelet events for node latest-worker Mar 22 00:42:35.406: INFO: Logging pods the kubelet thinks is on node latest-worker Mar 22 00:42:35.413: INFO: kindnet-l4mzm started at 2021-03-22 00:02:51 +0000 UTC (0+1 container statuses recorded) Mar 22 00:42:35.413: INFO: Container kindnet-cni ready: true, restart count 0 Mar 22 00:42:35.413: INFO: chaos-daemon-vb9xf started at 2021-03-22 00:02:51 +0000 UTC (0+1 container statuses recorded) Mar 22 00:42:35.413: INFO: Container chaos-daemon ready: true, restart count 0 Mar 22 00:42:35.413: INFO: chaos-controller-manager-69c479c674-rdmrr started at 2021-03-22 00:02:52 +0000 UTC (0+1 container statuses recorded) Mar 22 00:42:35.413: INFO: Container chaos-mesh ready: true, restart count 0 Mar 22 00:42:35.413: INFO: hostexec-latest-worker-cx2kk started at 2021-03-22 00:42:31 +0000 UTC (0+1 container statuses recorded) Mar 22 00:42:35.413: INFO: Container agnhost-container ready: true, restart count 0 Mar 22 00:42:35.413: INFO: kube-proxy-5wvjm started at 2021-02-19 10:12:05 +0000 UTC (0+1 container statuses recorded) Mar 22 00:42:35.413: INFO: Container kube-proxy ready: true, restart count 0 W0322 00:42:35.424653 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Mar 22 00:42:35.648: INFO: Latency metrics for node latest-worker Mar 22 00:42:35.648: INFO: Logging node info for node latest-worker2 Mar 22 00:42:35.651: INFO: Node Info: &Node{ObjectMeta:{latest-worker2 7d2a1377-0c6f-45fb-899e-6c307ecb1803 7000114 0 2021-02-19 10:12:05 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux io.kubernetes.storage.mock/node:some-mock-node kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-worker2 kubernetes.io/os:linux] 
map[csi.volume.kubernetes.io/nodeid:{"csi-mock-csi-mock-volumes-1005":"csi-mock-csi-mock-volumes-1005","csi-mock-csi-mock-volumes-1125":"csi-mock-csi-mock-volumes-1125","csi-mock-csi-mock-volumes-1158":"csi-mock-csi-mock-volumes-1158","csi-mock-csi-mock-volumes-1167":"csi-mock-csi-mock-volumes-1167","csi-mock-csi-mock-volumes-117":"csi-mock-csi-mock-volumes-117","csi-mock-csi-mock-volumes-123":"csi-mock-csi-mock-volumes-123","csi-mock-csi-mock-volumes-1248":"csi-mock-csi-mock-volumes-1248","csi-mock-csi-mock-volumes-132":"csi-mock-csi-mock-volumes-132","csi-mock-csi-mock-volumes-1462":"csi-mock-csi-mock-volumes-1462","csi-mock-csi-mock-volumes-1508":"csi-mock-csi-mock-volumes-1508","csi-mock-csi-mock-volumes-1620":"csi-mock-csi-mock-volumes-1620","csi-mock-csi-mock-volumes-1694":"csi-mock-csi-mock-volumes-1694","csi-mock-csi-mock-volumes-1823":"csi-mock-csi-mock-volumes-1823","csi-mock-csi-mock-volumes-1829":"csi-mock-csi-mock-volumes-1829","csi-mock-csi-mock-volumes-1852":"csi-mock-csi-mock-volumes-1852","csi-mock-csi-mock-volumes-1856":"csi-mock-csi-mock-volumes-1856","csi-mock-csi-mock-volumes-194":"csi-mock-csi-mock-volumes-194","csi-mock-csi-mock-volumes-1943":"csi-mock-csi-mock-volumes-1943","csi-mock-csi-mock-volumes-2103":"csi-mock-csi-mock-volumes-2103","csi-mock-csi-mock-volumes-2109":"csi-mock-csi-mock-volumes-2109","csi-mock-csi-mock-volumes-2134":"csi-mock-csi-mock-volumes-2134","csi-mock-csi-mock-volumes-2212":"csi-mock-csi-mock-volumes-2212","csi-mock-csi-mock-volumes-2302":"csi-mock-csi-mock-volumes-2302","csi-mock-csi-mock-volumes-2308":"csi-mock-csi-mock-volumes-2308","csi-mock-csi-mock-volumes-2407":"csi-mock-csi-mock-volumes-2407","csi-mock-csi-mock-volumes-2458":"csi-mock-csi-mock-volumes-2458","csi-mock-csi-mock-volumes-2474":"csi-mock-csi-mock-volumes-2474","csi-mock-csi-mock-volumes-254":"csi-mock-csi-mock-volumes-254","csi-mock-csi-mock-volumes-2575":"csi-mock-csi-mock-volumes-2575","csi-mock-csi-mock-volumes-2622":"csi-mock-csi-mock-volumes-2622","csi-mock-csi-mock-volumes-2651":"csi-mock-csi-mock-volumes-2651","csi-mock-csi-mock-volumes-2668":"csi-mock-csi-mock-volumes-2668","csi-mock-csi-mock-volumes-2781":"csi-mock-csi-mock-volumes-2781","csi-mock-csi-mock-volumes-2791":"csi-mock-csi-mock-volumes-2791","csi-mock-csi-mock-volumes-2823":"csi-mock-csi-mock-volumes-2823","csi-mock-csi-mock-volumes-2847":"csi-mock-csi-mock-volumes-2847","csi-mock-csi-mock-volumes-295":"csi-mock-csi-mock-volumes-295","csi-mock-csi-mock-volumes-3129":"csi-mock-csi-mock-volumes-3129","csi-mock-csi-mock-volumes-3190":"csi-mock-csi-mock-volumes-3190","csi-mock-csi-mock-volumes-3428":"csi-mock-csi-mock-volumes-3428","csi-mock-csi-mock-volumes-3490":"csi-mock-csi-mock-volumes-3490","csi-mock-csi-mock-volumes-3491":"csi-mock-csi-mock-volumes-3491","csi-mock-csi-mock-volumes-3532":"csi-mock-csi-mock-volumes-3532","csi-mock-csi-mock-volumes-3609":"csi-mock-csi-mock-volumes-3609","csi-mock-csi-mock-volumes-368":"csi-mock-csi-mock-volumes-368","csi-mock-csi-mock-volumes-3698":"csi-mock-csi-mock-volumes-3698","csi-mock-csi-mock-volumes-3714":"csi-mock-csi-mock-volumes-3714","csi-mock-csi-mock-volumes-3720":"csi-mock-csi-mock-volumes-3720","csi-mock-csi-mock-volumes-3722":"csi-mock-csi-mock-volumes-3722","csi-mock-csi-mock-volumes-3723":"csi-mock-csi-mock-volumes-3723","csi-mock-csi-mock-volumes-3783":"csi-mock-csi-mock-volumes-3783","csi-mock-csi-mock-volumes-3887":"csi-mock-csi-mock-volumes-3887","csi-mock-csi-mock-volumes-3973":"csi-mock-csi-mock-volumes-3973","csi-mock-csi-mock-volumes-4074":"
csi-mock-csi-mock-volumes-4074","csi-mock-csi-mock-volumes-4164":"csi-mock-csi-mock-volumes-4164","csi-mock-csi-mock-volumes-4181":"csi-mock-csi-mock-volumes-4181","csi-mock-csi-mock-volumes-4442":"csi-mock-csi-mock-volumes-4442","csi-mock-csi-mock-volumes-4483":"csi-mock-csi-mock-volumes-4483","csi-mock-csi-mock-volumes-4549":"csi-mock-csi-mock-volumes-4549","csi-mock-csi-mock-volumes-4707":"csi-mock-csi-mock-volumes-4707","csi-mock-csi-mock-volumes-4906":"csi-mock-csi-mock-volumes-4906","csi-mock-csi-mock-volumes-4977":"csi-mock-csi-mock-volumes-4977","csi-mock-csi-mock-volumes-5116":"csi-mock-csi-mock-volumes-5116","csi-mock-csi-mock-volumes-5166":"csi-mock-csi-mock-volumes-5166","csi-mock-csi-mock-volumes-5233":"csi-mock-csi-mock-volumes-5233","csi-mock-csi-mock-volumes-5344":"csi-mock-csi-mock-volumes-5344","csi-mock-csi-mock-volumes-5406":"csi-mock-csi-mock-volumes-5406","csi-mock-csi-mock-volumes-5433":"csi-mock-csi-mock-volumes-5433","csi-mock-csi-mock-volumes-5483":"csi-mock-csi-mock-volumes-5483","csi-mock-csi-mock-volumes-5486":"csi-mock-csi-mock-volumes-5486","csi-mock-csi-mock-volumes-5520":"csi-mock-csi-mock-volumes-5520","csi-mock-csi-mock-volumes-5540":"csi-mock-csi-mock-volumes-5540","csi-mock-csi-mock-volumes-5592":"csi-mock-csi-mock-volumes-5592","csi-mock-csi-mock-volumes-5741":"csi-mock-csi-mock-volumes-5741","csi-mock-csi-mock-volumes-5753":"csi-mock-csi-mock-volumes-5753","csi-mock-csi-mock-volumes-5790":"csi-mock-csi-mock-volumes-5790","csi-mock-csi-mock-volumes-5820":"csi-mock-csi-mock-volumes-5820","csi-mock-csi-mock-volumes-5830":"csi-mock-csi-mock-volumes-5830","csi-mock-csi-mock-volumes-5880":"csi-mock-csi-mock-volumes-5880","csi-mock-csi-mock-volumes-5886":"csi-mock-csi-mock-volumes-5886","csi-mock-csi-mock-volumes-5899":"csi-mock-csi-mock-volumes-5899","csi-mock-csi-mock-volumes-5925":"csi-mock-csi-mock-volumes-5925","csi-mock-csi-mock-volumes-5928":"csi-mock-csi-mock-volumes-5928","csi-mock-csi-mock-volumes-6098":"csi-mock-csi-mock-volumes-6098","csi-mock-csi-mock-volumes-6154":"csi-mock-csi-mock-volumes-6154","csi-mock-csi-mock-volumes-6193":"csi-mock-csi-mock-volumes-6193","csi-mock-csi-mock-volumes-6237":"csi-mock-csi-mock-volumes-6237","csi-mock-csi-mock-volumes-6393":"csi-mock-csi-mock-volumes-6393","csi-mock-csi-mock-volumes-6394":"csi-mock-csi-mock-volumes-6394","csi-mock-csi-mock-volumes-6468":"csi-mock-csi-mock-volumes-6468","csi-mock-csi-mock-volumes-6508":"csi-mock-csi-mock-volumes-6508","csi-mock-csi-mock-volumes-6516":"csi-mock-csi-mock-volumes-6516","csi-mock-csi-mock-volumes-6520":"csi-mock-csi-mock-volumes-6520","csi-mock-csi-mock-volumes-6574":"csi-mock-csi-mock-volumes-6574","csi-mock-csi-mock-volumes-6663":"csi-mock-csi-mock-volumes-6663","csi-mock-csi-mock-volumes-6715":"csi-mock-csi-mock-volumes-6715","csi-mock-csi-mock-volumes-6754":"csi-mock-csi-mock-volumes-6754","csi-mock-csi-mock-volumes-6804":"csi-mock-csi-mock-volumes-6804","csi-mock-csi-mock-volumes-6918":"csi-mock-csi-mock-volumes-6918","csi-mock-csi-mock-volumes-6925":"csi-mock-csi-mock-volumes-6925","csi-mock-csi-mock-volumes-7092":"csi-mock-csi-mock-volumes-7092","csi-mock-csi-mock-volumes-7139":"csi-mock-csi-mock-volumes-7139","csi-mock-csi-mock-volumes-7270":"csi-mock-csi-mock-volumes-7270","csi-mock-csi-mock-volumes-7273":"csi-mock-csi-mock-volumes-7273","csi-mock-csi-mock-volumes-7442":"csi-mock-csi-mock-volumes-7442","csi-mock-csi-mock-volumes-7448":"csi-mock-csi-mock-volumes-7448","csi-mock-csi-mock-volumes-7543":"csi-mock-csi-mock-volumes-7543","csi-mock-csi-mock-volume
s-7597":"csi-mock-csi-mock-volumes-7597","csi-mock-csi-mock-volumes-7608":"csi-mock-csi-mock-volumes-7608","csi-mock-csi-mock-volumes-7642":"csi-mock-csi-mock-volumes-7642","csi-mock-csi-mock-volumes-7659":"csi-mock-csi-mock-volumes-7659","csi-mock-csi-mock-volumes-7725":"csi-mock-csi-mock-volumes-7725","csi-mock-csi-mock-volumes-7760":"csi-mock-csi-mock-volumes-7760","csi-mock-csi-mock-volumes-778":"csi-mock-csi-mock-volumes-778","csi-mock-csi-mock-volumes-7811":"csi-mock-csi-mock-volumes-7811","csi-mock-csi-mock-volumes-7819":"csi-mock-csi-mock-volumes-7819","csi-mock-csi-mock-volumes-791":"csi-mock-csi-mock-volumes-791","csi-mock-csi-mock-volumes-7929":"csi-mock-csi-mock-volumes-7929","csi-mock-csi-mock-volumes-7930":"csi-mock-csi-mock-volumes-7930","csi-mock-csi-mock-volumes-7933":"csi-mock-csi-mock-volumes-7933","csi-mock-csi-mock-volumes-7950":"csi-mock-csi-mock-volumes-7950","csi-mock-csi-mock-volumes-8005":"csi-mock-csi-mock-volumes-8005","csi-mock-csi-mock-volumes-8070":"csi-mock-csi-mock-volumes-8070","csi-mock-csi-mock-volumes-8123":"csi-mock-csi-mock-volumes-8123","csi-mock-csi-mock-volumes-8132":"csi-mock-csi-mock-volumes-8132","csi-mock-csi-mock-volumes-8134":"csi-mock-csi-mock-volumes-8134","csi-mock-csi-mock-volumes-8381":"csi-mock-csi-mock-volumes-8381","csi-mock-csi-mock-volumes-8391":"csi-mock-csi-mock-volumes-8391","csi-mock-csi-mock-volumes-8409":"csi-mock-csi-mock-volumes-8409","csi-mock-csi-mock-volumes-8420":"csi-mock-csi-mock-volumes-8420","csi-mock-csi-mock-volumes-8467":"csi-mock-csi-mock-volumes-8467","csi-mock-csi-mock-volumes-8581":"csi-mock-csi-mock-volumes-8581","csi-mock-csi-mock-volumes-8619":"csi-mock-csi-mock-volumes-8619","csi-mock-csi-mock-volumes-8708":"csi-mock-csi-mock-volumes-8708","csi-mock-csi-mock-volumes-8766":"csi-mock-csi-mock-volumes-8766","csi-mock-csi-mock-volumes-8789":"csi-mock-csi-mock-volumes-8789","csi-mock-csi-mock-volumes-8800":"csi-mock-csi-mock-volumes-8800","csi-mock-csi-mock-volumes-8819":"csi-mock-csi-mock-volumes-8819","csi-mock-csi-mock-volumes-8830":"csi-mock-csi-mock-volumes-8830","csi-mock-csi-mock-volumes-9034":"csi-mock-csi-mock-volumes-9034","csi-mock-csi-mock-volumes-9051":"csi-mock-csi-mock-volumes-9051","csi-mock-csi-mock-volumes-9052":"csi-mock-csi-mock-volumes-9052","csi-mock-csi-mock-volumes-9211":"csi-mock-csi-mock-volumes-9211","csi-mock-csi-mock-volumes-9234":"csi-mock-csi-mock-volumes-9234","csi-mock-csi-mock-volumes-9238":"csi-mock-csi-mock-volumes-9238","csi-mock-csi-mock-volumes-9316":"csi-mock-csi-mock-volumes-9316","csi-mock-csi-mock-volumes-9344":"csi-mock-csi-mock-volumes-9344","csi-mock-csi-mock-volumes-9386":"csi-mock-csi-mock-volumes-9386","csi-mock-csi-mock-volumes-941":"csi-mock-csi-mock-volumes-941","csi-mock-csi-mock-volumes-9416":"csi-mock-csi-mock-volumes-9416","csi-mock-csi-mock-volumes-9426":"csi-mock-csi-mock-volumes-9426","csi-mock-csi-mock-volumes-9429":"csi-mock-csi-mock-volumes-9429","csi-mock-csi-mock-volumes-9438":"csi-mock-csi-mock-volumes-9438","csi-mock-csi-mock-volumes-9439":"csi-mock-csi-mock-volumes-9439","csi-mock-csi-mock-volumes-9665":"csi-mock-csi-mock-volumes-9665","csi-mock-csi-mock-volumes-9772":"csi-mock-csi-mock-volumes-9772","csi-mock-csi-mock-volumes-9793":"csi-mock-csi-mock-volumes-9793","csi-mock-csi-mock-volumes-9921":"csi-mock-csi-mock-volumes-9921","csi-mock-csi-mock-volumes-9953":"csi-mock-csi-mock-volumes-9953"} kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 
volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-02-19 10:12:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {kube-controller-manager Update v1 2021-03-22 00:34:44 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}} {kubelet Update v1 2021-03-22 00:34:58 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:csi.volume.kubernetes.io/nodeid":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:io.kubernetes.storage.mock/node":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:kind://docker/latest/latest-worker2,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-03-22 00:39:59 +0000 UTC,LastTransitionTime:2021-02-19 10:12:05 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-03-22 00:39:59 +0000 UTC,LastTransitionTime:2021-02-19 10:12:05 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-03-22 00:39:59 +0000 UTC,LastTransitionTime:2021-02-19 10:12:05 +0000 
UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-03-22 00:39:59 +0000 UTC,LastTransitionTime:2021-02-19 10:12:35 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.13,},NodeAddress{Type:Hostname,Address:latest-worker2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:c99819c175ab4bf18f91789643e4cec7,SystemUUID:67d3f1bb-f61c-4599-98ac-5879bee4ddde,BootID:b267d78b-f69b-4338-80e8-3f4944338e5d,KernelVersion:4.15.0-118-generic,OSImage:Ubuntu 20.04 LTS,ContainerRuntimeVersion:containerd://1.3.3-14-g449e9269,KubeletVersion:v1.21.0-alpha.0,KubeProxyVersion:v1.21.0-alpha.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/ollivier/clearwater-cassandra@sha256:7460a5e29c91e16bd36534b5284b27832e1c6e0f7c22a4bb79eda79942d250e1 docker.io/ollivier/clearwater-cassandra:hunter],SizeBytes:386500834,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead-prov@sha256:ef3803b4336ea217f9a467dac7f5c4bc3d2cd0f3cbbcc6c101419506fc3a6fa4 docker.io/ollivier/clearwater-homestead-prov:hunter],SizeBytes:360721934,},ContainerImage{Names:[docker.io/ollivier/clearwater-ellis@sha256:abcaf4a9ec74aa746f3b3949f05ee359349fce2b0b3b2deedf67164719cff5dc docker.io/ollivier/clearwater-ellis:hunter],SizeBytes:351519591,},ContainerImage{Names:[docker.io/ollivier/clearwater-homer@sha256:f8738d756137990120f6aafca7e0bc57186ab591a4b51508ba9ddf30688f4de1 docker.io/ollivier/clearwater-homer:hunter],SizeBytes:344304298,},ContainerImage{Names:[docker.io/ollivier/clearwater-astaire@sha256:5bc730435563dc3e31f3a3b7bb58b5899c453929b624052622b3a9a1c2fd04d8 docker.io/ollivier/clearwater-astaire:hunter],SizeBytes:327310970,},ContainerImage{Names:[docker.io/ollivier/clearwater-bono@sha256:0da41ddbf72c158f8e617015450587634cdf6856696005601b2108df92a27254 docker.io/ollivier/clearwater-bono:hunter],SizeBytes:303708624,},ContainerImage{Names:[docker.io/ollivier/clearwater-sprout@sha256:fd91f90bdfbfac30288ca296962e36537f8fa311d34711904e027098227d9f49 docker.io/ollivier/clearwater-sprout:hunter],SizeBytes:298627136,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead@sha256:8467d6c9caefa6a402eba927ee31f8179b2dc3386ac4768dd04f4bd630a1b9e9 docker.io/ollivier/clearwater-homestead:hunter],SizeBytes:295167572,},ContainerImage{Names:[docker.io/ollivier/clearwater-ralf@sha256:7054a741dbd4ea45fa4efd8138c07e21d35fbed627971559de5df2d03883e94f docker.io/ollivier/clearwater-ralf:hunter],SizeBytes:287441316,},ContainerImage{Names:[docker.io/ollivier/clearwater-chronos@sha256:1d5f9b5385db632e3076463fec5a4db5d4a15ef21ac5a112cb42fb0551ffa36d docker.io/ollivier/clearwater-chronos:hunter],SizeBytes:285504787,},ContainerImage{Names:[k8s.gcr.io/etcd:3.4.13-0],SizeBytes:254659261,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.21.0-alpha.0],SizeBytes:136866158,},ContainerImage{Names:[docker.io/aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 docker.io/aquasec/kube-hunter:0.3.1],SizeBytes:124684106,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 
k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:112029652,},ContainerImage{Names:[docker.io/gluster/glusterdynamic-provisioner@sha256:90067cb05a7d217651e84576935934fd9dcff8e6bcdcbaa416bbf36fcd09dbd1 docker.io/gluster/glusterdynamic-provisioner:v1.0],SizeBytes:111200078,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner@sha256:8bc20b52ce066dd4ea3d9eaac40c04ea8a77f47c33789676580cf4c7c9ea3c3d k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner:v1.0],SizeBytes:111199402,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.21.0-alpha.0],SizeBytes:95511852,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.21.0-alpha.0],SizeBytes:88147262,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:85425365,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.21.0-alpha.0],SizeBytes:66088748,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:49230179,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:a3f7549ea04c419276e8b84e90a515bbce5bc8a057be2ed974ec45492eca346e k8s.gcr.io/e2e-test-images/agnhost:2.26],SizeBytes:49216572,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:4f3eec1e602e1e0b0e5858c8f3d399d0ef536799ec1f7666e025f211001cb706 k8s.gcr.io/e2e-test-images/agnhost:2.28],SizeBytes:49210832,},ContainerImage{Names:[k8s.gcr.io/coredns:1.7.0],SizeBytes:45355487,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:41902332,},ContainerImage{Names:[docker.io/library/httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a docker.io/library/httpd:2.4.39-alpine],SizeBytes:41901429,},ContainerImage{Names:[docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 docker.io/library/httpd:2.4.38-alpine],SizeBytes:40765017,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:40765006,},ContainerImage{Names:[docker.io/ollivier/clearwater-live-test@sha256:1ec1c46f531b5466e62e2a3f1e65e97336e8ea0ca50f11a4305775d6aeac9a58 docker.io/ollivier/clearwater-live-test:hunter],SizeBytes:39060692,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:ff02aacd9766d597883fabafc7ad604c719a57611db1bcc1564c69a45b000a55 gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.17],SizeBytes:25311280,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:24757245,},ContainerImage{Names:[docker.io/pingcap/chaos-daemon@sha256:2138de599ca94104101fcd8f3bd04401f8c88f2bc721d0f77d55a81b85f4d92f docker.io/pingcap/chaos-daemon:v0.8.0],SizeBytes:21885344,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:20c828075d1e36f679d6a91e905b0927141eef5e15be0c9a1ca4a6a0ed9313d2 
k8s.gcr.io/sig-storage/csi-provisioner:v2.1.0],SizeBytes:21205045,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:78e3393f5fd5ff6c1e5dada2478cfa456fb7164929e573cf9a87bf6532730679 k8s.gcr.io/sig-storage/csi-provisioner:v1.6.0],SizeBytes:19408504,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/regression-issue-74839@sha256:b4f1d8d61bdad84bd50442d161d5460e4019d53e989b64220fdbc62fc87d76bf k8s.gcr.io/e2e-test-images/regression-issue-74839:1.2],SizeBytes:18651485,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:2ffa647e8107cfd39e5f464e738dce014c9f5e51b108da36c3ab621048d0bbab k8s.gcr.io/sig-storage/csi-attacher:v2.2.0],SizeBytes:18451536,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:6c6a0332693a7c456378f6abd2bb40611826c1e1a733cadbdae2daab3125b71c k8s.gcr.io/sig-storage/csi-resizer:v0.5.0],SizeBytes:18412631,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:4051e85640c22f8e00c34dbd273576fc9e1e2829992656588062be9c0f69b04b k8s.gcr.io/e2e-test-images/nonroot:1.1],SizeBytes:17748448,},ContainerImage{Names:[docker.io/pingcap/chaos-mesh@sha256:b90c04665e0275602e52df37d4b6fdd4621df7d4e4c823fc63356559f67dea72 docker.io/pingcap/chaos-mesh:v0.8.0],SizeBytes:13863887,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e docker.io/coredns/coredns:1.8.0],SizeBytes:12945155,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:4a6e0769130686518325b21b0c1d0688b54e7c79244d48e1b15634e98e40c6ef docker.io/coredns/coredns:1.7.1],SizeBytes:12935701,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:642ff9910da6ea9a8624b0234eef52af9ca75ecbec474c5507cb096bdfbae4e5 docker.io/coredns/coredns:latest],SizeBytes:12893350,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 docker.io/aquasec/kube-bench:0.3.1],SizeBytes:8042926,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:9622c6a6dac7499a055a382930f4de82905a3c5735c0753f7094115c9c871309 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v1.3.0],SizeBytes:7717137,},ContainerImage{Names:[quay.io/coreos/etcd:v2.2.5],SizeBytes:7670543,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 22 00:42:35.651: INFO: Logging kubelet events for node latest-worker2 Mar 22 00:42:35.657: INFO: Logging pods the kubelet thinks is on node latest-worker2 Mar 22 00:42:35.673: INFO: kube-proxy-7q92q started at 2021-02-19 10:12:05 +0000 UTC (0+1 container statuses recorded) Mar 22 00:42:35.673: INFO: Container kube-proxy ready: true, restart count 0 Mar 22 00:42:35.673: INFO: kindnet-7qb7q started at 2021-03-22 00:02:52 +0000 UTC (0+1 container statuses recorded) Mar 22 00:42:35.673: INFO: Container kindnet-cni ready: true, restart count 0 Mar 22 00:42:35.673: INFO: chaos-daemon-4zjcg started at 2021-03-22 00:02:52 +0000 UTC (0+1 container statuses recorded) Mar 22 00:42:35.673: INFO: Container chaos-daemon ready: true, restart count 0 W0322 00:42:35.678103 7 metrics_grabber.go:105] Did not receive an external client 
interface. Grabbing metrics from ClusterAutoscaler is disabled.
Mar 22 00:42:35.897: INFO: Latency metrics for node latest-worker2
Mar 22 00:42:35.897: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-859" for this suite.
[AfterEach] [sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750
• Failure [167.476 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
should have session affinity work for NodePort service [LinuxOnly] [Conformance] [It]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Mar 22 00:42:01.913: Unexpected error:
<*errors.errorString | 0xc0033b4540>: {
s: "no subset of available IP address found for the endpoint affinity-nodeport within timeout 2m0s",
}
no subset of available IP address found for the endpoint affinity-nodeport within timeout 2m0s
occurred
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2563
------------------------------
{"msg":"FAILED [sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","total":330,"completed":140,"skipped":2375,"failed":9,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]}
SSSSSS
------------------------------
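The failure above is the conformance check that traffic to a NodePort Service with sessionAffinity set to ClientIP keeps landing on the same backend pod. The error text means the test never observed a ready endpoint subset for the Service "affinity-nodeport" within its 2m0s window, which points at the backend pods rather than at the affinity setting itself. A minimal client-go sketch of the Service shape this spec exercises; the namespace, selector, and port numbers here are illustrative, not taken from this run:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumes a reachable cluster via the default kubeconfig, as in this run.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	svc := &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "affinity-nodeport"},
		Spec: corev1.ServiceSpec{
			Type:            corev1.ServiceTypeNodePort,
			SessionAffinity: corev1.ServiceAffinityClientIP, // pin each client IP to one endpoint
			Selector:        map[string]string{"app": "affinity-nodeport"},
			Ports: []corev1.ServicePort{{
				Port:       80,
				TargetPort: intstr.FromInt(9376), // illustrative backend port
			}},
		},
	}
	created, err := cs.CoreV1().Services("default").Create(context.TODO(), svc, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("allocated NodePort:", created.Spec.Ports[0].NodePort)
}

With ClientIP affinity, kube-proxy pins each source IP to a single endpoint (for three hours by default), which the test verifies by hitting the NodePort repeatedly and expecting the same hostname back every time.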
[sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] Projected downwardAPI
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 22 00:42:35.912: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
[It] should update annotations on modification [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating the pod
Mar 22 00:42:36.060: INFO: The status of Pod annotationupdate5cb7b48f-423d-401e-b68f-0056b8b4981e is Pending, waiting for it to be Running (with Ready = true)
Mar 22 00:42:38.065: INFO: The status of Pod annotationupdate5cb7b48f-423d-401e-b68f-0056b8b4981e is Pending, waiting for it to be Running (with Ready = true)
Mar 22 00:42:40.065: INFO: The status of Pod annotationupdate5cb7b48f-423d-401e-b68f-0056b8b4981e is Running (Ready = true)
Mar 22 00:42:40.597: INFO: Successfully updated pod "annotationupdate5cb7b48f-423d-401e-b68f-0056b8b4981e"
[AfterEach] [sig-storage] Projected downwardAPI
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 22 00:42:44.641: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9823" for this suite.
• [SLOW TEST:8.740 seconds]
[sig-storage] Projected downwardAPI
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
should update annotations on modification [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":330,"completed":141,"skipped":2381,"failed":9,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]}
SSS
------------------------------
[sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] Secrets
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 22 00:42:44.652: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating secret with name secret-test-map-50b16ec1-5853-4d1c-9bbe-f92f162267f8
STEP: Creating a pod to test consume secrets
Mar 22 00:42:44.788: INFO: Waiting up to 5m0s for pod "pod-secrets-b694e34f-d877-469e-930d-6f784ac68e54" in namespace "secrets-8639" to be "Succeeded or Failed"
Mar 22 00:42:44.804: INFO: Pod "pod-secrets-b694e34f-d877-469e-930d-6f784ac68e54": Phase="Pending", Reason="", readiness=false. Elapsed: 15.541028ms
Mar 22 00:42:46.807: INFO: Pod "pod-secrets-b694e34f-d877-469e-930d-6f784ac68e54": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019213143s
Mar 22 00:42:48.857: INFO: Pod "pod-secrets-b694e34f-d877-469e-930d-6f784ac68e54": Phase="Running", Reason="", readiness=true. Elapsed: 4.068882089s
Mar 22 00:42:51.092: INFO: Pod "pod-secrets-b694e34f-d877-469e-930d-6f784ac68e54": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.304133263s
STEP: Saw pod success
Mar 22 00:42:51.092: INFO: Pod "pod-secrets-b694e34f-d877-469e-930d-6f784ac68e54" satisfied condition "Succeeded or Failed"
Mar 22 00:42:51.125: INFO: Trying to get logs from node latest-worker2 pod pod-secrets-b694e34f-d877-469e-930d-6f784ac68e54 container secret-volume-test:
STEP: delete the pod
Mar 22 00:42:51.216: INFO: Waiting for pod pod-secrets-b694e34f-d877-469e-930d-6f784ac68e54 to disappear
Mar 22 00:42:51.292: INFO: Pod pod-secrets-b694e34f-d877-469e-930d-6f784ac68e54 no longer exists
[AfterEach] [sig-storage] Secrets
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 22 00:42:51.292: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-8639" for this suite.
• [SLOW TEST:6.651 seconds]
[sig-storage] Secrets
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":330,"completed":142,"skipped":2384,"failed":9,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]}
SSSSSSSSSSSSSSSS
------------------------------
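The Secrets spec that just passed mounts a Secret through a volume whose items list remaps each data key to an explicit relative path (the "mappings"). A sketch of the pod shape under test; the secret name, key, and path follow the usual e2e pattern but are illustrative here, and agnhost's mounttest subcommand simply prints the mounted file's content back for verification:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "secret-volume",
				VolumeSource: corev1.VolumeSource{
					Secret: &corev1.SecretVolumeSource{
						SecretName: "secret-test-map-demo", // the run above used a generated name
						// The mapping: key "data-1" lands at new/path/data-1 under the mount.
						Items: []corev1.KeyToPath{{Key: "data-1", Path: "new/path/data-1"}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:  "secret-volume-test",
				Image: "k8s.gcr.io/e2e-test-images/agnhost:2.28", // present on the node, per the image list above
				Args:  []string{"mounttest", "--file_content=/etc/secret-volume/new/path/data-1"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "secret-volume",
					MountPath: "/etc/secret-volume",
					ReadOnly:  true,
				}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out)) // roughly the manifest this spec submits
}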
[sig-api-machinery] Events should delete a collection of events [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-api-machinery] Events
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 22 00:42:51.303: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename events
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete a collection of events [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Create set of events
Mar 22 00:42:51.945: INFO: created test-event-1
Mar 22 00:42:51.969: INFO: created test-event-2
Mar 22 00:42:52.047: INFO: created test-event-3
STEP: get a list of Events with a label in the current namespace
STEP: delete collection of events
Mar 22 00:42:52.071: INFO: requesting DeleteCollection of events
STEP: check that the list of events matches the requested quantity
Mar 22 00:42:52.342: INFO: requesting list of events to confirm quantity
[AfterEach] [sig-api-machinery] Events
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 22 00:42:52.345: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-9867" for this suite.
•{"msg":"PASSED [sig-api-machinery] Events should delete a collection of events [Conformance]","total":330,"completed":143,"skipped":2400,"failed":9,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]}
SSSSSSSS
------------------------------
[sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] Downward API
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 22 00:42:52.355: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward api env vars
Mar 22 00:42:52.434: INFO: Waiting up to 5m0s for pod "downward-api-4751e100-f6f6-4c9a-b43b-f17cf18ce710" in namespace "downward-api-9646" to be "Succeeded or Failed"
Mar 22 00:42:52.474: INFO: Pod "downward-api-4751e100-f6f6-4c9a-b43b-f17cf18ce710": Phase="Pending", Reason="", readiness=false. Elapsed: 39.119563ms
Mar 22 00:42:54.671: INFO: Pod "downward-api-4751e100-f6f6-4c9a-b43b-f17cf18ce710": Phase="Pending", Reason="", readiness=false. Elapsed: 2.236978059s
Mar 22 00:42:56.675: INFO: Pod "downward-api-4751e100-f6f6-4c9a-b43b-f17cf18ce710": Phase="Running", Reason="", readiness=true.
Elapsed: 4.240547216s Mar 22 00:42:58.681: INFO: Pod "downward-api-4751e100-f6f6-4c9a-b43b-f17cf18ce710": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.246470852s STEP: Saw pod success Mar 22 00:42:58.681: INFO: Pod "downward-api-4751e100-f6f6-4c9a-b43b-f17cf18ce710" satisfied condition "Succeeded or Failed" Mar 22 00:42:58.684: INFO: Trying to get logs from node latest-worker2 pod downward-api-4751e100-f6f6-4c9a-b43b-f17cf18ce710 container dapi-container: STEP: delete the pod Mar 22 00:42:58.763: INFO: Waiting for pod downward-api-4751e100-f6f6-4c9a-b43b-f17cf18ce710 to disappear Mar 22 00:42:58.777: INFO: Pod downward-api-4751e100-f6f6-4c9a-b43b-f17cf18ce710 no longer exists [AfterEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 00:42:58.777: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9646" for this suite. • [SLOW TEST:6.431 seconds] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should provide host IP as an env var [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":330,"completed":144,"skipped":2408,"failed":9,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]} S ------------------------------ [sig-scheduling] SchedulerPreemption [Serial] PreemptionExecutionPath runs ReplicaSets to verify preemption running path [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 00:42:58.786: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-preemption STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:90 Mar 22 00:42:59.130: INFO: Waiting up to 1m0s for all nodes to be ready Mar 22 00:43:59.154: INFO: Waiting for terminating namespaces to be deleted... 
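Both preemption specs in this suite revolve around PriorityClass objects: a pod referencing a higher-value class may evict lower-priority pods from a full node, and the PreemptionExecutionPath run below drives that with ReplicaSets at different priorities. A hedged sketch of the two object shapes involved; the names, value, and image are illustrative, not taken from this run:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	schedulingv1 "k8s.io/api/scheduling/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// A PriorityClass; its Value is immutable once created.
	pc := schedulingv1.PriorityClass{
		ObjectMeta:  metav1.ObjectMeta{Name: "high-priority"},
		Value:       1000,
		Description: "pods at this priority may preempt lower-priority pods",
	}
	// A pod opting into that priority via priorityClassName; on a full node
	// the scheduler may preempt lower-priority pods to place it.
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "preemptor"},
		Spec: corev1.PodSpec{
			PriorityClassName: pc.Name,
			Containers: []corev1.Container{{
				Name:  "pause",
				Image: "k8s.gcr.io/pause:3.2",
			}},
		},
	}
	pcJSON, _ := json.Marshal(pc)
	podJSON, _ := json.Marshal(pod)
	fmt.Println(string(pcJSON))
	fmt.Println(string(podJSON))
}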
[BeforeEach] PreemptionExecutionPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 00:43:59.157: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-preemption-path STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] PreemptionExecutionPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:488 STEP: Finding an available node STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. Mar 22 00:44:05.378: INFO: found a healthy node: latest-worker2 [It] runs ReplicaSets to verify preemption running path [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Mar 22 00:44:25.999: INFO: pods created so far: [1 1 1] Mar 22 00:44:25.999: INFO: length of pods created so far: 3 Mar 22 00:45:38.021: INFO: pods created so far: [2 2 1] [AfterEach] PreemptionExecutionPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 00:45:45.021: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-path-4210" for this suite. [AfterEach] PreemptionExecutionPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:462 [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 00:45:45.158: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-6477" for this suite. 
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:78 • [SLOW TEST:166.423 seconds] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 PreemptionExecutionPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:451 runs ReplicaSets to verify preemption running path [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] PreemptionExecutionPath runs ReplicaSets to verify preemption running path [Conformance]","total":330,"completed":145,"skipped":2409,"failed":9,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 00:45:45.209: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: udp [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Performing setup for networking test in namespace pod-network-test-6901 STEP: creating a selector STEP: Creating the service pods in kubernetes Mar 22 00:45:45.292: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Mar 22 00:45:45.398: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 22 00:45:47.755: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 22 00:45:49.554: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 22 00:45:51.505: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 22 00:45:53.429: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 22 00:45:55.598: INFO: The status of Pod netserver-0 is 
Running (Ready = false) Mar 22 00:45:57.402: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 22 00:45:59.401: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 22 00:46:01.404: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 22 00:46:03.403: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 22 00:46:05.403: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 22 00:46:07.402: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 22 00:46:09.412: INFO: The status of Pod netserver-0 is Running (Ready = true) Mar 22 00:46:09.444: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Mar 22 00:46:13.493: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2 Mar 22 00:46:13.493: INFO: Breadth first check of 10.244.2.132 on host 172.18.0.9... Mar 22 00:46:13.496: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.133:9080/dial?request=hostname&protocol=udp&host=10.244.2.132&port=8081&tries=1'] Namespace:pod-network-test-6901 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 22 00:46:13.496: INFO: >>> kubeConfig: /root/.kube/config Mar 22 00:46:13.618: INFO: Waiting for responses: map[] Mar 22 00:46:13.618: INFO: reached 10.244.2.132 after 0/1 tries Mar 22 00:46:13.618: INFO: Breadth first check of 10.244.1.146 on host 172.18.0.13... Mar 22 00:46:13.621: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.133:9080/dial?request=hostname&protocol=udp&host=10.244.1.146&port=8081&tries=1'] Namespace:pod-network-test-6901 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 22 00:46:13.621: INFO: >>> kubeConfig: /root/.kube/config Mar 22 00:46:13.711: INFO: Waiting for responses: map[] Mar 22 00:46:13.711: INFO: reached 10.244.1.146 after 0/1 tries Mar 22 00:46:13.711: INFO: Going to retry 0 out of 2 pods.... [AfterEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 00:46:13.712: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-6901" for this suite. 
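The breadth-first checks above exec into the test pod and curl agnhost's /dial endpoint, which relays a UDP "hostname" request to the target netserver and reports which hosts answered. The same probe can be issued directly with net/http from anywhere that can reach the pod network; the IPs below are the ones from this run, and the response shape is sketched from what the framework parses:

package main

import (
	"encoding/json"
	"fmt"
	"io"
	"net/http"
)

func main() {
	// Mirrors the logged check:
	// curl 'http://10.244.2.133:9080/dial?request=hostname&protocol=udp&host=10.244.2.132&port=8081&tries=1'
	url := "http://10.244.2.133:9080/dial?request=hostname&protocol=udp&host=10.244.2.132&port=8081&tries=1"
	resp, err := http.Get(url)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		panic(err)
	}
	// agnhost answers with JSON along the lines of {"responses":["netserver-0"]};
	// an empty list would mean the UDP request never came back within the try budget.
	var out struct {
		Responses []string `json:"responses"`
	}
	if err := json.Unmarshal(body, &out); err != nil {
		panic(err)
	}
	fmt.Println("hosts that answered over UDP:", out.Responses)
}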
• [SLOW TEST:28.515 seconds] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/framework.go:23 Granular Checks: Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/networking.go:30 should function for intra-pod communication: udp [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","total":330,"completed":146,"skipped":2433,"failed":9,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSS ------------------------------ [sig-node] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 00:46:13.725: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46 [It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Mar 22 00:46:13.841: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-d4d0e557-2e71-43b1-818f-102fe7390544" in namespace "security-context-test-1631" to be "Succeeded or Failed" Mar 22 00:46:13.847: INFO: Pod "busybox-privileged-false-d4d0e557-2e71-43b1-818f-102fe7390544": Phase="Pending", Reason="", readiness=false. Elapsed: 5.700012ms Mar 22 00:46:15.985: INFO: Pod "busybox-privileged-false-d4d0e557-2e71-43b1-818f-102fe7390544": Phase="Pending", Reason="", readiness=false. Elapsed: 2.144078777s Mar 22 00:46:17.989: INFO: Pod "busybox-privileged-false-d4d0e557-2e71-43b1-818f-102fe7390544": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.147785035s
Mar 22 00:46:20.249: INFO: Pod "busybox-privileged-false-d4d0e557-2e71-43b1-818f-102fe7390544": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.407751786s
Mar 22 00:46:20.249: INFO: Pod "busybox-privileged-false-d4d0e557-2e71-43b1-818f-102fe7390544" satisfied condition "Succeeded or Failed"
Mar 22 00:46:20.293: INFO: Got logs for pod "busybox-privileged-false-d4d0e557-2e71-43b1-818f-102fe7390544": "ip: RTNETLINK answers: Operation not permitted\n"
[AfterEach] [sig-node] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 22 00:46:20.293: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-1631" for this suite.
• [SLOW TEST:6.615 seconds]
[sig-node] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
When creating a pod with privileged
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:232
should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":330,"completed":147,"skipped":2444,"failed":9,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]}
SSSSSSSSSS
------------------------------
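The "RTNETLINK answers: Operation not permitted" line in the captured container log is the expected outcome: with privileged set to false the container lacks CAP_NET_ADMIN, so busybox's ip command cannot create network devices. A sketch of the securityContext under test; the pod name, image tag, and exact ip invocation are illustrative:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	privileged := false // the behavior being verified
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "busybox-privileged-false-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "busybox",
				Image:   "docker.io/library/busybox:1.29",
				Command: []string{"sh", "-c", "ip link add dummy0 type dummy || true"},
				SecurityContext: &corev1.SecurityContext{
					// Without privilege the ip command must fail, which the test
					// confirms by reading the container log afterwards.
					Privileged: &privileged,
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}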
"client-containers-fd25cf73-9bae-4c21-b6af-ce98da396e1d" in namespace "containers-4486" to be "Succeeded or Failed" Mar 22 00:46:21.358: INFO: Pod "client-containers-fd25cf73-9bae-4c21-b6af-ce98da396e1d": Phase="Pending", Reason="", readiness=false. Elapsed: 305.255048ms Mar 22 00:46:23.425: INFO: Pod "client-containers-fd25cf73-9bae-4c21-b6af-ce98da396e1d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.371805308s Mar 22 00:46:25.459: INFO: Pod "client-containers-fd25cf73-9bae-4c21-b6af-ce98da396e1d": Phase="Running", Reason="", readiness=true. Elapsed: 4.405684333s Mar 22 00:46:27.549: INFO: Pod "client-containers-fd25cf73-9bae-4c21-b6af-ce98da396e1d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.49574804s STEP: Saw pod success Mar 22 00:46:27.549: INFO: Pod "client-containers-fd25cf73-9bae-4c21-b6af-ce98da396e1d" satisfied condition "Succeeded or Failed" Mar 22 00:46:27.585: INFO: Trying to get logs from node latest-worker pod client-containers-fd25cf73-9bae-4c21-b6af-ce98da396e1d container agnhost-container: STEP: delete the pod Mar 22 00:46:27.749: INFO: Waiting for pod client-containers-fd25cf73-9bae-4c21-b6af-ce98da396e1d to disappear Mar 22 00:46:27.813: INFO: Pod client-containers-fd25cf73-9bae-4c21-b6af-ce98da396e1d no longer exists [AfterEach] [sig-node] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 00:46:27.814: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-4486" for this suite. • [SLOW TEST:7.505 seconds] [sig-node] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should be able to override the image's default command and arguments [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":330,"completed":148,"skipped":2454,"failed":9,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should support proportional scaling [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-apps] Deployment 
[sig-apps] Deployment deployment should support proportional scaling [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-apps] Deployment
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 22 00:46:27.846: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:86
[It] deployment should support proportional scaling [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Mar 22 00:46:28.097: INFO: Creating deployment "webserver-deployment"
Mar 22 00:46:28.137: INFO: Waiting for observed generation 1
Mar 22 00:46:30.183: INFO: Waiting for all required pods to come up
Mar 22 00:46:30.188: INFO: Pod name httpd: Found 10 pods out of 10
STEP: ensuring each pod is running
Mar 22 00:46:46.257: INFO: Waiting for deployment "webserver-deployment" to complete
Mar 22 00:46:46.262: INFO: Updating deployment "webserver-deployment" with a non-existent image
Mar 22 00:46:46.270: INFO: Updating deployment webserver-deployment
Mar 22 00:46:46.270: INFO: Waiting for observed generation 2
Mar 22 00:46:48.700: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8
Mar 22 00:46:48.703: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8
Mar 22 00:46:48.706: INFO: Waiting for the first rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas
Mar 22 00:46:48.713: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0
Mar 22 00:46:48.713: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5
Mar 22 00:46:48.715: INFO: Waiting for the second rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas
Mar 22 00:46:48.719: INFO: Verifying that deployment "webserver-deployment" has minimum required number of available replicas
Mar 22 00:46:48.719: INFO: Scaling up the deployment "webserver-deployment" from 10 to 30
Mar 22 00:46:48.726: INFO: Updating deployment webserver-deployment
Mar 22 00:46:48.726: INFO: Waiting for the replicasets of deployment "webserver-deployment" to have desired number of replicas
Mar 22 00:46:49.062: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20
Mar 22 00:46:51.364: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13
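The two .spec.replicas checks above are the heart of proportional scaling. Before the scale-up the deployment holds 13 replicas across two ReplicaSets (8 on the old one, 5 on the new one stuck on its non-existent image, i.e. 10 desired plus maxSurge 3), and scaling the deployment from 10 to 30 adds 20 replicas that the controller splits across the ReplicaSets in proportion to their current sizes. A rough worked sketch of that split; the real controller's rounding and leftover-assignment logic is more involved:

package main

import "fmt"

func main() {
	oldRS, newRS := int32(8), int32(5) // .spec.replicas before the scale-up (10 + maxSurge 3)
	added := int32(20)                 // deployment scaled from 10 to 30
	total := oldRS + newRS             // 13

	oldShare := added * oldRS / total // 20*8/13 = 12 with integer division
	newShare := added - oldShare      // the remaining 8

	fmt.Printf("old ReplicaSet: %d -> %d\n", oldRS, oldRS+oldShare) // 8 -> 20, as verified above
	fmt.Printf("new ReplicaSet: %d -> %d\n", newRS, newRS+newShare) // 5 -> 13
}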
{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2021-03-22 00:46:46 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:unavailableReplicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*30,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc003cbd648 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:33,UpdatedReplicas:13,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2021-03-22 00:46:49 +0000 UTC,LastTransitionTime:2021-03-22 00:46:49 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "webserver-deployment-795d758f88" is progressing.,LastUpdateTime:2021-03-22 00:46:50 +0000 UTC,LastTransitionTime:2021-03-22 00:46:28 +0000 UTC,},},ReadyReplicas:8,CollisionCount:nil,},} Mar 22 00:46:53.184: INFO: New ReplicaSet "webserver-deployment-795d758f88" of Deployment "webserver-deployment": &ReplicaSet{ObjectMeta:{webserver-deployment-795d758f88 deployment-4480 8662cc26-8e81-46d9-bf21-a739e4435eb0 7002610 3 2021-03-22 00:46:46 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment webserver-deployment ab0aebcd-08de-4865-b583-c59c1478db55 
0xc003cbda27 0xc003cbda28}] [] [{kube-controller-manager Update apps/v1 2021-03-22 00:46:46 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ab0aebcd-08de-4865-b583-c59c1478db55\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 795d758f88,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc003cbdaa8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Mar 22 00:46:53.184: INFO: All old ReplicaSets of Deployment "webserver-deployment": Mar 22 00:46:53.184: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-847dcfb7fb deployment-4480 7a3b9657-d963-44ea-a0ab-c6064ce1026f 7002606 3 2021-03-22 00:46:28 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment ab0aebcd-08de-4865-b583-c59c1478db55 0xc003cbdb07 0xc003cbdb08}] [] [{kube-controller-manager Update apps/v1 2021-03-22 00:46:38 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ab0aebcd-08de-4865-b583-c59c1478db55\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*20,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 847dcfb7fb,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc003cbdb78 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},} Mar 22 00:46:53.430: INFO: Pod "webserver-deployment-795d758f88-4jd85" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-4jd85 webserver-deployment-795d758f88- deployment-4480 5edde9d3-fb1c-4584-a0c7-c62b008d1ae6 7002647 0 2021-03-22 00:46:49 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 8662cc26-8e81-46d9-bf21-a739e4435eb0 0xc003cbdfe7 0xc003cbdfe8}] [] [{kube-controller-manager Update v1 2021-03-22 00:46:49 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8662cc26-8e81-46d9-bf21-a739e4435eb0\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-03-22 00:46:50 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-hdl4g,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-hdl4g,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-hdl4g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-22 00:46:49 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2021-03-22 00:46:49 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-22 00:46:49 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-22 00:46:49 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:,StartTime:2021-03-22 00:46:49 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 22 00:46:53.431: INFO: Pod "webserver-deployment-795d758f88-77qgd" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-77qgd webserver-deployment-795d758f88- deployment-4480 7954d3d5-b6e6-4ca8-9ac1-a2a834e22bda 7002674 0 2021-03-22 00:46:46 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 8662cc26-8e81-46d9-bf21-a739e4435eb0 0xc002d88197 0xc002d88198}] [] [{kube-controller-manager Update v1 2021-03-22 00:46:46 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8662cc26-8e81-46d9-bf21-a739e4435eb0\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-03-22 00:46:53 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.151\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-hdl4g,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-hdl4g,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-hdl4g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-22 00:46:46 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-22 00:46:46 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-22 00:46:46 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-22 00:46:46 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:10.244.1.151,StartTime:2021-03-22 00:46:46 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.151,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 22 00:46:53.431: INFO: Pod "webserver-deployment-795d758f88-7hh24" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-7hh24 webserver-deployment-795d758f88- deployment-4480 e1a615e3-7ccc-45cb-bfe9-214dd07d6c5f 7002659 0 2021-03-22 00:46:49 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 8662cc26-8e81-46d9-bf21-a739e4435eb0 0xc002d88387 0xc002d88388}] [] [{kube-controller-manager Update v1 2021-03-22 00:46:49 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8662cc26-8e81-46d9-bf21-a739e4435eb0\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-03-22 00:46:51 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-hdl4g,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-hdl4g,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-hdl4g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-22 00:46:49 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2021-03-22 00:46:49 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-22 00:46:49 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-22 00:46:49 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.9,PodIP:,StartTime:2021-03-22 00:46:49 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 22 00:46:53.431: INFO: Pod "webserver-deployment-795d758f88-92d8b" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-92d8b webserver-deployment-795d758f88- deployment-4480 adc4b97c-e2cf-4100-af92-3642a106f084 7002641 0 2021-03-22 00:46:49 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 8662cc26-8e81-46d9-bf21-a739e4435eb0 0xc002d88537 0xc002d88538}] [] [{kube-controller-manager Update v1 2021-03-22 00:46:49 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8662cc26-8e81-46d9-bf21-a739e4435eb0\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-03-22 00:46:50 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-hdl4g,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-hdl4g,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-hdl4g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-22 00:46:49 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2021-03-22 00:46:49 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-22 00:46:49 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-22 00:46:49 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.9,PodIP:,StartTime:2021-03-22 00:46:49 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 22 00:46:53.432: INFO: Pod "webserver-deployment-795d758f88-c7qp5" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-c7qp5 webserver-deployment-795d758f88- deployment-4480 a3b8df3c-9e28-4f72-ae4f-d38c23f5f1ce 7002541 0 2021-03-22 00:46:46 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 8662cc26-8e81-46d9-bf21-a739e4435eb0 0xc002d886e7 0xc002d886e8}] [] [{kube-controller-manager Update v1 2021-03-22 00:46:46 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8662cc26-8e81-46d9-bf21-a739e4435eb0\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-03-22 00:46:47 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-hdl4g,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-hdl4g,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-hdl4g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-22 00:46:46 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2021-03-22 00:46:46 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-22 00:46:46 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-22 00:46:46 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:,StartTime:2021-03-22 00:46:46 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 22 00:46:53.432: INFO: Pod "webserver-deployment-795d758f88-hdmmp" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-hdmmp webserver-deployment-795d758f88- deployment-4480 0e684115-e181-4040-bcf7-56c77134906d 7002526 0 2021-03-22 00:46:46 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 8662cc26-8e81-46d9-bf21-a739e4435eb0 0xc002d88897 0xc002d88898}] [] [{kube-controller-manager Update v1 2021-03-22 00:46:46 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8662cc26-8e81-46d9-bf21-a739e4435eb0\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-03-22 00:46:46 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-hdl4g,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-hdl4g,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-hdl4g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-22 00:46:46 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2021-03-22 00:46:46 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-22 00:46:46 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-22 00:46:46 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:,StartTime:2021-03-22 00:46:46 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 22 00:46:53.432: INFO: Pod "webserver-deployment-795d758f88-jv8w8" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-jv8w8 webserver-deployment-795d758f88- deployment-4480 c7f4e77f-51c9-4149-9dcd-ea2cf929ea42 7002536 0 2021-03-22 00:46:46 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 8662cc26-8e81-46d9-bf21-a739e4435eb0 0xc002d88a47 0xc002d88a48}] [] [{kube-controller-manager Update v1 2021-03-22 00:46:46 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8662cc26-8e81-46d9-bf21-a739e4435eb0\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-03-22 00:46:47 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-hdl4g,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-hdl4g,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-hdl4g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-22 00:46:46 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2021-03-22 00:46:46 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-22 00:46:46 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-22 00:46:46 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:,StartTime:2021-03-22 00:46:46 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 22 00:46:53.432: INFO: Pod "webserver-deployment-795d758f88-kfl5k" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-kfl5k webserver-deployment-795d758f88- deployment-4480 d7251dd5-5bd1-41b4-9d8f-73d137e3d013 7002637 0 2021-03-22 00:46:49 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 8662cc26-8e81-46d9-bf21-a739e4435eb0 0xc002d88c07 0xc002d88c08}] [] [{kube-controller-manager Update v1 2021-03-22 00:46:49 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8662cc26-8e81-46d9-bf21-a739e4435eb0\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-03-22 00:46:50 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-hdl4g,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-hdl4g,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-hdl4g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-22 00:46:49 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2021-03-22 00:46:49 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-22 00:46:49 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-22 00:46:49 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:,StartTime:2021-03-22 00:46:49 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 22 00:46:53.433: INFO: Pod "webserver-deployment-795d758f88-mkk9f" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-mkk9f webserver-deployment-795d758f88- deployment-4480 272d1982-3620-4f27-89df-f5b78e75514e 7002543 0 2021-03-22 00:46:46 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 8662cc26-8e81-46d9-bf21-a739e4435eb0 0xc002d88dc7 0xc002d88dc8}] [] [{kube-controller-manager Update v1 2021-03-22 00:46:46 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8662cc26-8e81-46d9-bf21-a739e4435eb0\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-03-22 00:46:48 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-hdl4g,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-hdl4g,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-hdl4g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-22 00:46:46 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2021-03-22 00:46:46 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-22 00:46:46 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-22 00:46:46 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:,StartTime:2021-03-22 00:46:46 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 22 00:46:53.433: INFO: Pod "webserver-deployment-795d758f88-r9t2c" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-r9t2c webserver-deployment-795d758f88- deployment-4480 970e8cbc-9d52-405c-8167-baced99a4dee 7002602 0 2021-03-22 00:46:49 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 8662cc26-8e81-46d9-bf21-a739e4435eb0 0xc002d88f87 0xc002d88f88}] [] [{kube-controller-manager Update v1 2021-03-22 00:46:49 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8662cc26-8e81-46d9-bf21-a739e4435eb0\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-hdl4g,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-hdl4g,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-hdl4g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMo
unt:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-22 00:46:49 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 22 00:46:53.433: INFO: Pod "webserver-deployment-795d758f88-tcc6l" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-tcc6l webserver-deployment-795d758f88- deployment-4480 f4682ba7-bdaa-41d7-ab78-85f30d67d77f 7002623 0 2021-03-22 00:46:49 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 8662cc26-8e81-46d9-bf21-a739e4435eb0 0xc002d890c7 0xc002d890c8}] [] [{kube-controller-manager Update v1 2021-03-22 00:46:49 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8662cc26-8e81-46d9-bf21-a739e4435eb0\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-03-22 00:46:50 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-hdl4g,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-hdl4g,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-hdl4g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-22 00:46:49 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2021-03-22 00:46:49 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-22 00:46:49 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-22 00:46:49 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.9,PodIP:,StartTime:2021-03-22 00:46:49 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 22 00:46:53.433: INFO: Pod "webserver-deployment-795d758f88-w7wpm" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-w7wpm webserver-deployment-795d758f88- deployment-4480 1e1b01a3-275c-46ce-821c-b47460d093f9 7002616 0 2021-03-22 00:46:49 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 8662cc26-8e81-46d9-bf21-a739e4435eb0 0xc002d89277 0xc002d89278}] [] [{kube-controller-manager Update v1 2021-03-22 00:46:49 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8662cc26-8e81-46d9-bf21-a739e4435eb0\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-03-22 00:46:50 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-hdl4g,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-hdl4g,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-hdl4g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-22 00:46:49 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2021-03-22 00:46:49 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-22 00:46:49 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-22 00:46:49 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.9,PodIP:,StartTime:2021-03-22 00:46:49 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 22 00:46:53.433: INFO: Pod "webserver-deployment-795d758f88-xghng" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-xghng webserver-deployment-795d758f88- deployment-4480 75cefcb4-c4c3-4620-bba3-3d563f3b5a38 7002669 0 2021-03-22 00:46:49 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 8662cc26-8e81-46d9-bf21-a739e4435eb0 0xc002d89427 0xc002d89428}] [] [{kube-controller-manager Update v1 2021-03-22 00:46:49 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8662cc26-8e81-46d9-bf21-a739e4435eb0\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-03-22 00:46:52 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-hdl4g,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-hdl4g,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-hdl4g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-22 00:46:49 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2021-03-22 00:46:49 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-22 00:46:49 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-22 00:46:49 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.9,PodIP:,StartTime:2021-03-22 00:46:49 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 22 00:46:53.434: INFO: Pod "webserver-deployment-847dcfb7fb-24rzn" is not available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-24rzn webserver-deployment-847dcfb7fb- deployment-4480 0c361f00-4395-4e2e-a337-cb3138f8e3a7 7002620 0 2021-03-22 00:46:49 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb 7a3b9657-d963-44ea-a0ab-c6064ce1026f 0xc002d895d7 0xc002d895d8}] [] [{kube-controller-manager Update v1 2021-03-22 00:46:49 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7a3b9657-d963-44ea-a0ab-c6064ce1026f\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-03-22 00:46:50 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-hdl4g,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-hdl4g,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-hdl4g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-22 00:46:49 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-22 00:46:49 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-22 00:46:49 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-22 00:46:49 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:,StartTime:2021-03-22 00:46:49 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 22 00:46:53.434: INFO: Pod "webserver-deployment-847dcfb7fb-4qvjg" is available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-4qvjg webserver-deployment-847dcfb7fb- deployment-4480 84d2db7c-2042-49f3-8331-fb3010e34a22 7002455 0 2021-03-22 00:46:28 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb 7a3b9657-d963-44ea-a0ab-c6064ce1026f 0xc002d89767 0xc002d89768}] [] [{kube-controller-manager Update v1 2021-03-22 00:46:28 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7a3b9657-d963-44ea-a0ab-c6064ce1026f\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-03-22 00:46:44 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.139\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-hdl4g,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-hdl4g,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-hdl4g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-22 00:46:28 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-22 00:46:43 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-22 00:46:43 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-22 00:46:28 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.9,PodIP:10.244.2.139,StartTime:2021-03-22 00:46:28 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-03-22 00:46:42 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:containerd://273be4d4294662dbc7906d5731dbc45001dbf01b2d3a9d67ad97de4030b77904,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.139,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 22 00:46:53.434: INFO: Pod "webserver-deployment-847dcfb7fb-8dp2k" is not available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-8dp2k webserver-deployment-847dcfb7fb- deployment-4480 8c66abf1-1ea9-4688-b5e4-f109281771a2 7002652 0 2021-03-22 00:46:49 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb 7a3b9657-d963-44ea-a0ab-c6064ce1026f 0xc002d89917 0xc002d89918}] [] [{kube-controller-manager Update v1 2021-03-22 00:46:49 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7a3b9657-d963-44ea-a0ab-c6064ce1026f\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-03-22 00:46:51 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-hdl4g,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-hdl4g,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-hdl4g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-22 00:46:49 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-22 00:46:49 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-22 00:46:49 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-22 00:46:49 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:,StartTime:2021-03-22 00:46:49 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 22 00:46:53.434: INFO: Pod "webserver-deployment-847dcfb7fb-cxqrn" is available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-cxqrn webserver-deployment-847dcfb7fb- deployment-4480 0b74f61e-c3e7-4747-824e-9f574b22891a 7002424 0 2021-03-22 00:46:28 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb 7a3b9657-d963-44ea-a0ab-c6064ce1026f 0xc002d89aa7 0xc002d89aa8}] [] [{kube-controller-manager Update v1 2021-03-22 00:46:28 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7a3b9657-d963-44ea-a0ab-c6064ce1026f\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-03-22 00:46:39 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.137\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-hdl4g,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-hdl4g,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-hdl4g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-22 00:46:28 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-22 00:46:39 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-22 00:46:39 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-22 00:46:28 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.9,PodIP:10.244.2.137,StartTime:2021-03-22 00:46:28 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-03-22 00:46:38 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:containerd://82676743352eccbbe3170ca11579a9c10bb430a8317f5df901407eb23825f8d6,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.137,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 22 00:46:53.434: INFO: Pod "webserver-deployment-847dcfb7fb-fdnz4" is not available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-fdnz4 webserver-deployment-847dcfb7fb- deployment-4480 ff0edde1-5714-4a0b-946c-c6a5f2b3d039 7002658 0 2021-03-22 00:46:49 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb 7a3b9657-d963-44ea-a0ab-c6064ce1026f 0xc002d89c67 0xc002d89c68}] [] [{kube-controller-manager Update v1 2021-03-22 00:46:49 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7a3b9657-d963-44ea-a0ab-c6064ce1026f\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-03-22 00:46:51 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-hdl4g,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-hdl4g,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-hdl4g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-22 00:46:49 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-22 00:46:49 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-22 00:46:49 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-22 00:46:49 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:,StartTime:2021-03-22 00:46:49 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 22 00:46:53.435: INFO: Pod "webserver-deployment-847dcfb7fb-gqlvj" is not available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-gqlvj webserver-deployment-847dcfb7fb- deployment-4480 f70cc4dc-a204-4d81-a41d-9ae7b3af717a 7002632 0 2021-03-22 00:46:49 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb 7a3b9657-d963-44ea-a0ab-c6064ce1026f 0xc002d89e17 0xc002d89e18}] [] [{kube-controller-manager Update v1 2021-03-22 00:46:49 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7a3b9657-d963-44ea-a0ab-c6064ce1026f\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-03-22 00:46:50 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-hdl4g,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-hdl4g,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-hdl4g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-22 00:46:49 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-22 00:46:49 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-22 00:46:49 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-22 00:46:49 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.9,PodIP:,StartTime:2021-03-22 00:46:49 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 22 00:46:53.435: INFO: Pod "webserver-deployment-847dcfb7fb-hckjc" is not available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-hckjc webserver-deployment-847dcfb7fb- deployment-4480 6d793e6b-86a9-4a05-8a4e-0e2f1fc53bbc 7002629 0 2021-03-22 00:46:49 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb 7a3b9657-d963-44ea-a0ab-c6064ce1026f 0xc002d89fa7 0xc002d89fa8}] [] [{kube-controller-manager Update v1 2021-03-22 00:46:49 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7a3b9657-d963-44ea-a0ab-c6064ce1026f\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-03-22 00:46:50 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-hdl4g,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-hdl4g,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-hdl4g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-22 00:46:49 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-22 00:46:49 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-22 00:46:49 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-22 00:46:49 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:,StartTime:2021-03-22 00:46:49 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 22 00:46:53.435: INFO: Pod "webserver-deployment-847dcfb7fb-hg4hz" is not available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-hg4hz webserver-deployment-847dcfb7fb- deployment-4480 828f5078-7e44-4922-aa4c-5466380ec4d6 7002614 0 2021-03-22 00:46:49 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb 7a3b9657-d963-44ea-a0ab-c6064ce1026f 0xc003596137 0xc003596138}] [] [{kube-controller-manager Update v1 2021-03-22 00:46:49 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7a3b9657-d963-44ea-a0ab-c6064ce1026f\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-03-22 00:46:50 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-hdl4g,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-hdl4g,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-hdl4g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-22 00:46:49 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-22 00:46:49 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-22 00:46:49 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-22 00:46:49 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:,StartTime:2021-03-22 00:46:49 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 22 00:46:53.435: INFO: Pod "webserver-deployment-847dcfb7fb-jljtc" is not available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-jljtc webserver-deployment-847dcfb7fb- deployment-4480 36209640-9a64-4cba-9855-6f803c016775 7002650 0 2021-03-22 00:46:49 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb 7a3b9657-d963-44ea-a0ab-c6064ce1026f 0xc0035962c7 0xc0035962c8}] [] [{kube-controller-manager Update v1 2021-03-22 00:46:49 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7a3b9657-d963-44ea-a0ab-c6064ce1026f\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-03-22 00:46:50 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-hdl4g,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-hdl4g,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-hdl4g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-22 00:46:49 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-22 00:46:49 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-22 00:46:49 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-22 00:46:49 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.9,PodIP:,StartTime:2021-03-22 00:46:49 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 22 00:46:53.436: INFO: Pod "webserver-deployment-847dcfb7fb-kn5hj" is available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-kn5hj webserver-deployment-847dcfb7fb- deployment-4480 300103e3-b558-4f36-a5eb-c5f93f16d99d 7002428 0 2021-03-22 00:46:28 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb 7a3b9657-d963-44ea-a0ab-c6064ce1026f 0xc003596457 0xc003596458}] [] [{kube-controller-manager Update v1 2021-03-22 00:46:28 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7a3b9657-d963-44ea-a0ab-c6064ce1026f\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-03-22 00:46:40 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.150\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-hdl4g,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-hdl4g,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-hdl4g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-22 00:46:28 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-22 00:46:39 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-22 00:46:39 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-22 00:46:28 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:10.244.1.150,StartTime:2021-03-22 00:46:28 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-03-22 00:46:39 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:containerd://77561196b44fcd1df813013b14d0c30ff733ee1dbd446c6cb6cf48b70465a12c,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.150,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 22 00:46:53.436: INFO: Pod "webserver-deployment-847dcfb7fb-nxcdv" is available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-nxcdv webserver-deployment-847dcfb7fb- deployment-4480 fb3eb65d-ac70-48c9-974a-dba13d71179a 7002409 0 2021-03-22 00:46:28 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb 7a3b9657-d963-44ea-a0ab-c6064ce1026f 0xc003596607 0xc003596608}] [] [{kube-controller-manager Update v1 2021-03-22 00:46:28 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7a3b9657-d963-44ea-a0ab-c6064ce1026f\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-03-22 00:46:37 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.136\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-hdl4g,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-hdl4g,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-hdl4g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-22 00:46:28 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-22 00:46:37 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-22 00:46:37 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-22 00:46:28 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.9,PodIP:10.244.2.136,StartTime:2021-03-22 00:46:28 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-03-22 00:46:37 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:containerd://8dd5e99e7942b43cb76da4c33733c3819e9476bcb65ac2cce7084d50d92d8f17,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.136,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 22 00:46:53.436: INFO: Pod "webserver-deployment-847dcfb7fb-pdnw9" is available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-pdnw9 webserver-deployment-847dcfb7fb- deployment-4480 271767aa-0129-44d2-aaf3-04b3b448fc4a 7002467 0 2021-03-22 00:46:28 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb 7a3b9657-d963-44ea-a0ab-c6064ce1026f 0xc0035967b7 0xc0035967b8}] [] [{kube-controller-manager Update v1 2021-03-22 00:46:28 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7a3b9657-d963-44ea-a0ab-c6064ce1026f\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-03-22 00:46:44 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.141\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-hdl4g,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-hdl4g,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-hdl4g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-22 00:46:28 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-22 00:46:44 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-22 00:46:44 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-22 00:46:28 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.9,PodIP:10.244.2.141,StartTime:2021-03-22 00:46:28 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-03-22 00:46:43 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:containerd://2fbc259cee6bedadb66216557b6abf2f16ea3cbb9cc3b2ec4df883e458d23129,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.141,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 22 00:46:53.436: INFO: Pod "webserver-deployment-847dcfb7fb-pqfn9" is not available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-pqfn9 webserver-deployment-847dcfb7fb- deployment-4480 8a5eb5f5-1348-4e6a-be0f-bfc962ec16c8 7002670 0 2021-03-22 00:46:49 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb 7a3b9657-d963-44ea-a0ab-c6064ce1026f 0xc003596967 0xc003596968}] [] [{kube-controller-manager Update v1 2021-03-22 00:46:49 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7a3b9657-d963-44ea-a0ab-c6064ce1026f\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-03-22 00:46:52 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-hdl4g,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-hdl4g,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-hdl4g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-22 00:46:49 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-22 00:46:49 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-22 00:46:49 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-22 00:46:49 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:,StartTime:2021-03-22 00:46:49 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 22 00:46:53.437: INFO: Pod "webserver-deployment-847dcfb7fb-s4nng" is available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-s4nng webserver-deployment-847dcfb7fb- deployment-4480 5b926c61-6041-41a0-9256-bde3067a3f27 7002439 0 2021-03-22 00:46:28 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb 7a3b9657-d963-44ea-a0ab-c6064ce1026f 0xc003596af7 0xc003596af8}] [] [{kube-controller-manager Update v1 2021-03-22 00:46:28 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7a3b9657-d963-44ea-a0ab-c6064ce1026f\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-03-22 00:46:41 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.138\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-hdl4g,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-hdl4g,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-hdl4g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-22 00:46:28 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-22 00:46:41 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-22 00:46:41 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-22 00:46:28 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.9,PodIP:10.244.2.138,StartTime:2021-03-22 00:46:28 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-03-22 00:46:41 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:containerd://f6e68db699c04cf2ba32cd1d7ff7df8a1c6248ce2839de8dedd326ffda8600d6,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.138,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 22 00:46:53.437: INFO: Pod "webserver-deployment-847dcfb7fb-tkpc4" is available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-tkpc4 webserver-deployment-847dcfb7fb- deployment-4480 2f9abca6-3045-490d-9413-40f446a87e27 7002470 0 2021-03-22 00:46:28 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb 7a3b9657-d963-44ea-a0ab-c6064ce1026f 0xc003596cb7 0xc003596cb8}] [] [{kube-controller-manager Update v1 2021-03-22 00:46:28 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7a3b9657-d963-44ea-a0ab-c6064ce1026f\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-03-22 00:46:44 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.142\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-hdl4g,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-hdl4g,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-hdl4g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-22 00:46:28 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-22 00:46:44 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-22 00:46:44 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-22 00:46:28 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.9,PodIP:10.244.2.142,StartTime:2021-03-22 00:46:28 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-03-22 00:46:44 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:containerd://49631d4ebf507622f4de0fff56afb06b6ea6dd9f58f08b3102453508efa5c377,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.142,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 22 00:46:53.437: INFO: Pod "webserver-deployment-847dcfb7fb-v8v2p" is available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-v8v2p webserver-deployment-847dcfb7fb- deployment-4480 8a8bd66c-6f74-4fea-9dce-f1026e016a9d 7002473 0 2021-03-22 00:46:28 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb 7a3b9657-d963-44ea-a0ab-c6064ce1026f 0xc003596e67 0xc003596e68}] [] [{kube-controller-manager Update v1 2021-03-22 00:46:28 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7a3b9657-d963-44ea-a0ab-c6064ce1026f\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-03-22 00:46:44 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.140\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-hdl4g,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-hdl4g,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-hdl4g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-22 00:46:28 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-22 00:46:44 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-22 00:46:44 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-22 00:46:28 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.9,PodIP:10.244.2.140,StartTime:2021-03-22 00:46:28 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-03-22 00:46:43 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:containerd://7b9c35c1def138cbc84ee34c0becd535b2f191f915efe138f5fc4c185670e127,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.140,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 22 00:46:53.437: INFO: Pod "webserver-deployment-847dcfb7fb-v9mtn" is not available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-v9mtn webserver-deployment-847dcfb7fb- deployment-4480 859aa95d-decc-4435-9142-6d7c16f2c4d9 7002603 0 2021-03-22 00:46:49 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb 7a3b9657-d963-44ea-a0ab-c6064ce1026f 0xc003597217 0xc003597218}] [] [{kube-controller-manager Update v1 2021-03-22 00:46:49 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7a3b9657-d963-44ea-a0ab-c6064ce1026f\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-03-22 00:46:49 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-hdl4g,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-hdl4g,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-hdl4g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-22 00:46:49 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-22 00:46:49 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-22 00:46:49 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-22 00:46:49 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.9,PodIP:,StartTime:2021-03-22 00:46:49 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 22 00:46:53.438: INFO: Pod "webserver-deployment-847dcfb7fb-vhk69" is not available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-vhk69 webserver-deployment-847dcfb7fb- deployment-4480 c4ac9a70-2291-4c98-8193-4c534bdd4ca3 7002588 0 2021-03-22 00:46:48 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb 7a3b9657-d963-44ea-a0ab-c6064ce1026f 0xc0035973c7 0xc0035973c8}] [] [{kube-controller-manager Update v1 2021-03-22 00:46:48 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7a3b9657-d963-44ea-a0ab-c6064ce1026f\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-03-22 00:46:49 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-hdl4g,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-hdl4g,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-hdl4g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-22 00:46:49 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-22 00:46:49 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-22 00:46:49 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-22 00:46:49 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:,StartTime:2021-03-22 00:46:49 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 22 00:46:53.438: INFO: Pod "webserver-deployment-847dcfb7fb-w86mn" is not available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-w86mn webserver-deployment-847dcfb7fb- deployment-4480 1a3bb4be-4b26-420e-94c6-f458c8af2ba3 7002675 0 2021-03-22 00:46:49 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb 7a3b9657-d963-44ea-a0ab-c6064ce1026f 0xc003597727 0xc003597728}] [] [{kube-controller-manager Update v1 2021-03-22 00:46:49 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7a3b9657-d963-44ea-a0ab-c6064ce1026f\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-03-22 00:46:53 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-hdl4g,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-hdl4g,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-hdl4g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-22 00:46:49 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-22 00:46:49 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-22 00:46:49 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-22 00:46:49 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.9,PodIP:,StartTime:2021-03-22 00:46:49 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 22 00:46:53.438: INFO: Pod "webserver-deployment-847dcfb7fb-xw2md" is not available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-xw2md webserver-deployment-847dcfb7fb- deployment-4480 b330dbcc-2b22-440f-bc44-a17e7916957d 7002654 0 2021-03-22 00:46:49 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb 7a3b9657-d963-44ea-a0ab-c6064ce1026f 0xc0035979c7 0xc0035979c8}] [] [{kube-controller-manager Update v1 2021-03-22 00:46:49 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7a3b9657-d963-44ea-a0ab-c6064ce1026f\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-03-22 00:46:51 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-hdl4g,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-hdl4g,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-hdl4g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-22 00:46:49 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-22 00:46:49 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-22 00:46:49 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-22 00:46:49 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.9,PodIP:,StartTime:2021-03-22 00:46:49 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 22 00:46:53.439: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-4480" for this suite.
• [SLOW TEST:26.529 seconds]
[sig-apps] Deployment
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support proportional scaling [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":330,"completed":149,"skipped":2504,"failed":9,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]}
SSSSSSSS
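The test that just passed exercises the deployment controller's proportional scaling: when a Deployment whose replicas are split across several ReplicaSets is scaled, the delta is divided across those ReplicaSets in proportion to their current sizes, rather than being absorbed entirely by the newest one. Below is a minimal sketch of that largest-remainder arithmetic for a scale-up; it is illustrative only (the function name and the example sizes are made up for this sketch, and the real controller additionally accounts for maxSurge and per-ReplicaSet annotations).

```go
// Illustrative sketch of proportional scaling: distribute a positive
// replica delta across ReplicaSets in proportion to their current sizes.
// Not the deployment controller's exact algorithm.
package main

import (
	"fmt"
	"sort"
)

func splitProportionally(sizes []int, delta int) []int {
	total := 0
	for _, s := range sizes {
		total += s
	}
	add := make([]int, len(sizes))
	if total == 0 || delta <= 0 {
		return add // sketch handles scale-up only
	}
	// Integer part of each ReplicaSet's proportional share.
	assigned := 0
	type rem struct{ idx, frac int }
	rems := make([]rem, len(sizes))
	for i, s := range sizes {
		add[i] = delta * s / total
		assigned += add[i]
		rems[i] = rem{i, delta * s % total}
	}
	// Leftover replicas go to the ReplicaSets with the largest remainders.
	sort.Slice(rems, func(a, b int) bool { return rems[a].frac > rems[b].frac })
	for i := 0; i < delta-assigned; i++ {
		add[rems[i].idx]++
	}
	return add
}

func main() {
	// e.g. old ReplicaSet at 8 replicas, new one at 5; scale up by 7:
	fmt.Println(splitProportionally([]int{8, 5}, 7)) // [4 3] -> 12 and 8
}
```

The largest-remainder step is what keeps the split exact: the integer shares alone would under-allocate, and handing the leftovers to the biggest fractional shares preserves both the total and the rough old/new ratio.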
------------------------------
[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 22 00:46:54.376: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746
[It] should be able to change the type from ExternalName to NodePort [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: creating a service externalname-service with the type=ExternalName in namespace services-9063
STEP: changing the ExternalName service to type=NodePort
STEP: creating replication controller externalname-service in namespace services-9063
I0322 00:46:59.317220 7 runners.go:190] Created replication controller with name: externalname-service, namespace: services-9063, replica count: 2
I0322 00:47:02.367772 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0322 00:47:05.368119 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0322 00:47:08.369043 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0322 00:47:11.369848 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0322 00:47:14.370758 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Mar 22 00:47:14.370: INFO: Creating new exec pod
E0322 00:47:20.675076 7 reflector.go:138] k8s.io/kubernetes/test/e2e/framework/service/jig.go:437: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: the server could not find the requested resource
E0322 00:47:21.687535 7 reflector.go:138] k8s.io/kubernetes/test/e2e/framework/service/jig.go:437: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: the server could not find the requested resource
E0322 00:47:24.139473 7 reflector.go:138] k8s.io/kubernetes/test/e2e/framework/service/jig.go:437: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: the server could not find the requested resource
E0322 00:47:30.266797 7 reflector.go:138] k8s.io/kubernetes/test/e2e/framework/service/jig.go:437: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: the server could not find the requested resource
E0322 00:47:38.174050 7 reflector.go:138] k8s.io/kubernetes/test/e2e/framework/service/jig.go:437: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: the server could not find the requested resource
E0322 00:47:59.424715 7 reflector.go:138] k8s.io/kubernetes/test/e2e/framework/service/jig.go:437: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: the server could not find the requested resource
E0322 00:48:31.226430 7 reflector.go:138] k8s.io/kubernetes/test/e2e/framework/service/jig.go:437: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: the server could not find the requested resource
E0322 00:49:19.858907 7 reflector.go:138] k8s.io/kubernetes/test/e2e/framework/service/jig.go:437: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: the server could not find the requested resource
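Each of the reflector errors above is the service test jig failing to list EndpointSlice objects; "the server could not find the requested resource" is the apiserver's standard reply for a group/version it does not serve, which points at skew between a newer test binary and an older apiserver rather than at the Service under test. A minimal client-go sketch for confirming which EndpointSlice versions a server actually serves follows; the kubeconfig path is the one from this run, everything else is illustrative.

```go
// Query API discovery to see whether the server serves
// discovery.k8s.io/v1 and/or v1beta1 EndpointSlices.
package main

import (
	"fmt"

	"k8s.io/client-go/discovery"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	dc, err := discovery.NewDiscoveryClientForConfig(cfg)
	if err != nil {
		panic(err)
	}
	for _, gv := range []string{"discovery.k8s.io/v1", "discovery.k8s.io/v1beta1"} {
		list, err := dc.ServerResourcesForGroupVersion(gv)
		if err != nil {
			// Mirrors the failure mode in the log: the group/version is not served.
			fmt.Printf("%s: not served (%v)\n", gv, err)
			continue
		}
		for _, r := range list.APIResources {
			fmt.Printf("%s: %s\n", gv, r.Name)
		}
	}
}
```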
Mar 22 00:49:20.673: FAIL: Unexpected error:
    <*errors.errorString | 0xc000fcea00>: {
        s: "no subset of available IP address found for the endpoint externalname-service within timeout 2m0s",
    }
    no subset of available IP address found for the endpoint externalname-service within timeout 2m0s
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/network.glob..func24.15()
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1351 +0x358
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc002c6a180)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c
k8s.io/kubernetes/test/e2e.TestE2E(0xc002c6a180)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b
testing.tRunner(0xc002c6a180, 0x6d60740)
	/usr/local/go/src/testing/testing.go:1194 +0xef
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1239 +0x2b3
Mar 22 00:49:20.674: INFO: Cleaning up the ExternalName to NodePort test service
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
STEP: Collecting events from namespace "services-9063".
STEP: Found 14 events.
Mar 22 00:49:20.753: INFO: At 2021-03-22 00:46:59 +0000 UTC - event for externalname-service: {replication-controller } SuccessfulCreate: Created pod: externalname-service-j4skh
Mar 22 00:49:20.753: INFO: At 2021-03-22 00:46:59 +0000 UTC - event for externalname-service: {replication-controller } SuccessfulCreate: Created pod: externalname-service-qmc9p
Mar 22 00:49:20.753: INFO: At 2021-03-22 00:46:59 +0000 UTC - event for externalname-service-j4skh: {default-scheduler } Scheduled: Successfully assigned services-9063/externalname-service-j4skh to latest-worker2
Mar 22 00:49:20.753: INFO: At 2021-03-22 00:46:59 +0000 UTC - event for externalname-service-qmc9p: {default-scheduler } Scheduled: Successfully assigned services-9063/externalname-service-qmc9p to latest-worker2
Mar 22 00:49:20.753: INFO: At 2021-03-22 00:47:06 +0000 UTC - event for externalname-service-j4skh: {kubelet latest-worker2} Pulled: Container image "k8s.gcr.io/e2e-test-images/agnhost:2.28" already present on machine
Mar 22 00:49:20.753: INFO: At 2021-03-22 00:47:06 +0000 UTC - event for externalname-service-qmc9p: {kubelet latest-worker2} Pulled: Container image "k8s.gcr.io/e2e-test-images/agnhost:2.28" already present on machine
Mar 22 00:49:20.753: INFO: At 2021-03-22 00:47:11 +0000 UTC - event for externalname-service-j4skh: {kubelet latest-worker2} Created: Created container externalname-service
Mar 22 00:49:20.753: INFO: At 2021-03-22 00:47:11 +0000 UTC - event for externalname-service-qmc9p: {kubelet latest-worker2} Created: Created container externalname-service
Mar 22 00:49:20.753: INFO: At 2021-03-22 00:47:12 +0000 UTC - event for externalname-service-j4skh: {kubelet latest-worker2} Started: Started container externalname-service
Mar 22 00:49:20.753: INFO: At 2021-03-22 00:47:12 +0000 UTC - event for externalname-service-qmc9p: {kubelet latest-worker2} Started: Started container externalname-service
Mar 22 00:49:20.753: INFO: At 2021-03-22 00:47:14 +0000 UTC - event for execpod2jm5f: {default-scheduler } Scheduled: Successfully assigned services-9063/execpod2jm5f to latest-worker
Mar 22 00:49:20.753: INFO: At 2021-03-22 00:47:16 +0000 UTC - event for execpod2jm5f: {kubelet latest-worker} Pulled: Container image "k8s.gcr.io/e2e-test-images/agnhost:2.28" already present on machine
Mar 22 00:49:20.753: INFO: At 2021-03-22 00:47:18 +0000 UTC - event for execpod2jm5f: {kubelet latest-worker} Created: Created container agnhost-container
Mar 22 00:49:20.753: INFO: At 2021-03-22 00:47:19 +0000 UTC - event for execpod2jm5f: {kubelet latest-worker} Started: Started container agnhost-container
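The FAIL above is the jig timing out while waiting for the service's endpoints to report a ready IP, even though the events show both backing pods scheduled and started; with the EndpointSlice watch broken, the wait likely never observes an address. A standalone check of the same condition against the legacy core/v1 Endpoints object might look roughly like the sketch below; the namespace, service name, kubeconfig path, and 2m0s timeout are taken from the log, and the polling approach is an assumption, not the jig's actual code.

```go
// Poll a Service's core/v1 Endpoints until a ready address appears,
// approximating the condition the failed 2m0s wait was checking.
package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	err = wait.PollImmediate(2*time.Second, 2*time.Minute, func() (bool, error) {
		ep, err := cs.CoreV1().Endpoints("services-9063").
			Get(context.TODO(), "externalname-service", metav1.GetOptions{})
		if err != nil {
			return false, nil // keep polling through transient errors
		}
		// A "subset of available IP addresses" means at least one ready address.
		for _, ss := range ep.Subsets {
			if len(ss.Addresses) > 0 {
				fmt.Printf("ready address: %s\n", ss.Addresses[0].IP)
				return true, nil
			}
		}
		return false, nil
	})
	if err != nil {
		fmt.Println("no ready endpoint address within timeout:", err)
	}
}
```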
Mar 22 00:49:20.756: INFO: POD NODE PHASE GRACE CONDITIONS
Mar 22 00:49:20.756: INFO: execpod2jm5f latest-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-03-22 00:47:14 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-03-22 00:47:19 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-03-22 00:47:19 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-03-22 00:47:14 +0000 UTC }]
Mar 22 00:49:20.756: INFO: externalname-service-j4skh latest-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-03-22 00:46:59 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-03-22 00:47:12 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-03-22 00:47:12 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-03-22 00:46:59 +0000 UTC }]
Mar 22 00:49:20.756: INFO: externalname-service-qmc9p latest-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-03-22 00:46:59 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-03-22 00:47:12 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-03-22 00:47:12 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-03-22 00:46:59 +0000 UTC }]
Mar 22 00:49:20.756: INFO:
Mar 22 00:49:20.761: INFO: Logging node info for node latest-control-plane
Mar 22 00:49:20.765: INFO: Node Info: &Node{ObjectMeta:{latest-control-plane 490b9532-4cb6-4803-8805-500c50bef538 7001548 0 2021-02-19 10:11:38 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-control-plane kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master:] map[kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2021-02-19 10:11:38 +0000 UTC FieldsV1
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {kubeadm Update v1 2021-02-19 10:11:39 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{}}}}} {kube-controller-manager Update v1 2021-02-19 10:12:16 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:kind://docker/latest/latest-control-plane,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-03-22 00:44:39 +0000 UTC,LastTransitionTime:2021-02-19 10:11:38 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-03-22 00:44:39 +0000 UTC,LastTransitionTime:2021-02-19 10:11:38 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-03-22 00:44:39 +0000 UTC,LastTransitionTime:2021-02-19 10:11:38 +0000 
UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-03-22 00:44:39 +0000 UTC,LastTransitionTime:2021-02-19 10:12:15 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.14,},NodeAddress{Type:Hostname,Address:latest-control-plane,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:2277e49732264d9b915753a27b5b08cc,SystemUUID:3fcd47a6-9190-448f-a26a-9823c0424f23,BootID:b267d78b-f69b-4338-80e8-3f4944338e5d,KernelVersion:4.15.0-118-generic,OSImage:Ubuntu 20.04 LTS,ContainerRuntimeVersion:containerd://1.3.3-14-g449e9269,KubeletVersion:v1.21.0-alpha.0,KubeProxyVersion:v1.21.0-alpha.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/etcd:3.4.13-0],SizeBytes:254659261,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.21.0-alpha.0],SizeBytes:136866158,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.21.0-alpha.0],SizeBytes:95511852,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.21.0-alpha.0],SizeBytes:88147262,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.21.0-alpha.0],SizeBytes:66088748,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:a3f7549ea04c419276e8b84e90a515bbce5bc8a057be2ed974ec45492eca346e k8s.gcr.io/e2e-test-images/agnhost:2.26],SizeBytes:49216572,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:4f3eec1e602e1e0b0e5858c8f3d399d0ef536799ec1f7666e025f211001cb706 k8s.gcr.io/e2e-test-images/agnhost:2.28],SizeBytes:49210832,},ContainerImage{Names:[k8s.gcr.io/coredns:1.7.0],SizeBytes:45355487,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 docker.io/aquasec/kube-bench:0.3.1],SizeBytes:8042926,},ContainerImage{Names:[k8s.gcr.io/pause:3.2],SizeBytes:685724,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:301268,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 22 00:49:20.765: INFO: Logging kubelet events for node latest-control-plane Mar 22 00:49:20.771: INFO: Logging pods the kubelet thinks is on node latest-control-plane Mar 22 00:49:20.796: INFO: etcd-latest-control-plane started at 2021-02-19 10:11:46 +0000 UTC (0+1 container statuses recorded) Mar 22 00:49:20.796: INFO: Container etcd ready: true, restart count 0 Mar 22 00:49:20.796: INFO: kube-proxy-6jdsd started at 2021-02-19 10:11:54 +0000 UTC (0+1 container statuses recorded) Mar 22 00:49:20.796: INFO: Container kube-proxy ready: true, restart count 0 Mar 22 00:49:20.796: INFO: coredns-74ff55c5b-tqd5x started at 2021-03-22 00:01:48 +0000 UTC (0+1 container statuses recorded) Mar 22 00:49:20.796: INFO: Container coredns ready: true, restart count 0 Mar 22 00:49:20.796: INFO: kube-controller-manager-latest-control-plane started at 2021-02-19 10:11:46 +0000 UTC (0+1 container statuses recorded) Mar 22 00:49:20.796: INFO: Container kube-controller-manager ready: true, restart count 0 Mar 22 00:49:20.796: INFO: kube-scheduler-latest-control-plane started 
at 2021-02-19 10:11:46 +0000 UTC (0+1 container statuses recorded) Mar 22 00:49:20.796: INFO: Container kube-scheduler ready: true, restart count 0 Mar 22 00:49:20.796: INFO: kube-apiserver-latest-control-plane started at 2021-02-19 10:11:46 +0000 UTC (0+1 container statuses recorded) Mar 22 00:49:20.796: INFO: Container kube-apiserver ready: true, restart count 0 Mar 22 00:49:20.796: INFO: kindnet-94zqp started at 2021-02-19 10:11:54 +0000 UTC (0+1 container statuses recorded) Mar 22 00:49:20.796: INFO: Container kindnet-cni ready: true, restart count 0 Mar 22 00:49:20.796: INFO: coredns-74ff55c5b-9rxsk started at 2021-03-22 00:01:47 +0000 UTC (0+1 container statuses recorded) Mar 22 00:49:20.796: INFO: Container coredns ready: true, restart count 0 Mar 22 00:49:20.796: INFO: local-path-provisioner-8b46957d4-54gls started at 2021-02-19 10:12:21 +0000 UTC (0+1 container statuses recorded) Mar 22 00:49:20.796: INFO: Container local-path-provisioner ready: true, restart count 0 W0322 00:49:20.804374 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Mar 22 00:49:20.896: INFO: Latency metrics for node latest-control-plane Mar 22 00:49:20.896: INFO: Logging node info for node latest-worker Mar 22 00:49:20.934: INFO: Node Info: &Node{ObjectMeta:{latest-worker 52cd6d4b-d53f-435d-801a-04c2822dec44 7001611 0 2021-02-19 10:12:05 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux io.kubernetes.storage.mock/node:some-mock-node kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-worker kubernetes.io/os:linux] map[csi.volume.kubernetes.io/nodeid:{"csi-mock-csi-mock-volumes-11":"csi-mock-csi-mock-volumes-11","csi-mock-csi-mock-volumes-1170":"csi-mock-csi-mock-volumes-1170","csi-mock-csi-mock-volumes-1204":"csi-mock-csi-mock-volumes-1204","csi-mock-csi-mock-volumes-1243":"csi-mock-csi-mock-volumes-1243","csi-mock-csi-mock-volumes-135":"csi-mock-csi-mock-volumes-135","csi-mock-csi-mock-volumes-1517":"csi-mock-csi-mock-volumes-1517","csi-mock-csi-mock-volumes-1561":"csi-mock-csi-mock-volumes-1561","csi-mock-csi-mock-volumes-1595":"csi-mock-csi-mock-volumes-1595","csi-mock-csi-mock-volumes-1714":"csi-mock-csi-mock-volumes-1714","csi-mock-csi-mock-volumes-1732":"csi-mock-csi-mock-volumes-1732","csi-mock-csi-mock-volumes-1876":"csi-mock-csi-mock-volumes-1876","csi-mock-csi-mock-volumes-1945":"csi-mock-csi-mock-volumes-1945","csi-mock-csi-mock-volumes-2041":"csi-mock-csi-mock-volumes-2041","csi-mock-csi-mock-volumes-2057":"csi-mock-csi-mock-volumes-2057","csi-mock-csi-mock-volumes-2208":"csi-mock-csi-mock-volumes-2208","csi-mock-csi-mock-volumes-2302":"csi-mock-csi-mock-volumes-2302","csi-mock-csi-mock-volumes-2459":"csi-mock-csi-mock-volumes-2459","csi-mock-csi-mock-volumes-2511":"csi-mock-csi-mock-volumes-2511","csi-mock-csi-mock-volumes-2706":"csi-mock-csi-mock-volumes-2706","csi-mock-csi-mock-volumes-2710":"csi-mock-csi-mock-volumes-2710","csi-mock-csi-mock-volumes-273":"csi-mock-csi-mock-volumes-273","csi-mock-csi-mock-volumes-2782":"csi-mock-csi-mock-volumes-2782","csi-mock-csi-mock-volumes-2797":"csi-mock-csi-mock-volumes-2797","csi-mock-csi-mock-volumes-2805":"csi-mock-csi-mock-volumes-2805","csi-mock-csi-mock-volumes-2843":"csi-mock-csi-mock-volumes-2843","csi-mock-csi-mock-volumes-3075":"csi-mock-csi-mock-volumes-3075","csi-mock-csi-mock-volumes-3107":"csi-mock-csi-mock-volumes-3107","csi-mock-csi-mock-volumes-3115":"csi-mock-csi-mock-volumes-3115","csi-mock-csi-mock-volumes-3164":"csi-mock-csi-mock-volum
es-3164","csi-mock-csi-mock-volumes-3202":"csi-mock-csi-mock-volumes-3202","csi-mock-csi-mock-volumes-3235":"csi-mock-csi-mock-volumes-3235","csi-mock-csi-mock-volumes-3298":"csi-mock-csi-mock-volumes-3298","csi-mock-csi-mock-volumes-3313":"csi-mock-csi-mock-volumes-3313","csi-mock-csi-mock-volumes-3364":"csi-mock-csi-mock-volumes-3364","csi-mock-csi-mock-volumes-3488":"csi-mock-csi-mock-volumes-3488","csi-mock-csi-mock-volumes-3490":"csi-mock-csi-mock-volumes-3490","csi-mock-csi-mock-volumes-3546":"csi-mock-csi-mock-volumes-3546","csi-mock-csi-mock-volumes-3568":"csi-mock-csi-mock-volumes-3568","csi-mock-csi-mock-volumes-3595":"csi-mock-csi-mock-volumes-3595","csi-mock-csi-mock-volumes-3615":"csi-mock-csi-mock-volumes-3615","csi-mock-csi-mock-volumes-3622":"csi-mock-csi-mock-volumes-3622","csi-mock-csi-mock-volumes-3660":"csi-mock-csi-mock-volumes-3660","csi-mock-csi-mock-volumes-3738":"csi-mock-csi-mock-volumes-3738","csi-mock-csi-mock-volumes-380":"csi-mock-csi-mock-volumes-380","csi-mock-csi-mock-volumes-3905":"csi-mock-csi-mock-volumes-3905","csi-mock-csi-mock-volumes-3983":"csi-mock-csi-mock-volumes-3983","csi-mock-csi-mock-volumes-4658":"csi-mock-csi-mock-volumes-4658","csi-mock-csi-mock-volumes-4689":"csi-mock-csi-mock-volumes-4689","csi-mock-csi-mock-volumes-4839":"csi-mock-csi-mock-volumes-4839","csi-mock-csi-mock-volumes-4871":"csi-mock-csi-mock-volumes-4871","csi-mock-csi-mock-volumes-4885":"csi-mock-csi-mock-volumes-4885","csi-mock-csi-mock-volumes-4888":"csi-mock-csi-mock-volumes-4888","csi-mock-csi-mock-volumes-5028":"csi-mock-csi-mock-volumes-5028","csi-mock-csi-mock-volumes-5118":"csi-mock-csi-mock-volumes-5118","csi-mock-csi-mock-volumes-5120":"csi-mock-csi-mock-volumes-5120","csi-mock-csi-mock-volumes-5160":"csi-mock-csi-mock-volumes-5160","csi-mock-csi-mock-volumes-5164":"csi-mock-csi-mock-volumes-5164","csi-mock-csi-mock-volumes-5225":"csi-mock-csi-mock-volumes-5225","csi-mock-csi-mock-volumes-526":"csi-mock-csi-mock-volumes-526","csi-mock-csi-mock-volumes-5365":"csi-mock-csi-mock-volumes-5365","csi-mock-csi-mock-volumes-5399":"csi-mock-csi-mock-volumes-5399","csi-mock-csi-mock-volumes-5443":"csi-mock-csi-mock-volumes-5443","csi-mock-csi-mock-volumes-5520":"csi-mock-csi-mock-volumes-5520","csi-mock-csi-mock-volumes-5561":"csi-mock-csi-mock-volumes-5561","csi-mock-csi-mock-volumes-5608":"csi-mock-csi-mock-volumes-5608","csi-mock-csi-mock-volumes-5652":"csi-mock-csi-mock-volumes-5652","csi-mock-csi-mock-volumes-5672":"csi-mock-csi-mock-volumes-5672","csi-mock-csi-mock-volumes-569":"csi-mock-csi-mock-volumes-569","csi-mock-csi-mock-volumes-5759":"csi-mock-csi-mock-volumes-5759","csi-mock-csi-mock-volumes-5910":"csi-mock-csi-mock-volumes-5910","csi-mock-csi-mock-volumes-6046":"csi-mock-csi-mock-volumes-6046","csi-mock-csi-mock-volumes-6099":"csi-mock-csi-mock-volumes-6099","csi-mock-csi-mock-volumes-621":"csi-mock-csi-mock-volumes-621","csi-mock-csi-mock-volumes-6347":"csi-mock-csi-mock-volumes-6347","csi-mock-csi-mock-volumes-6447":"csi-mock-csi-mock-volumes-6447","csi-mock-csi-mock-volumes-6752":"csi-mock-csi-mock-volumes-6752","csi-mock-csi-mock-volumes-6763":"csi-mock-csi-mock-volumes-6763","csi-mock-csi-mock-volumes-7184":"csi-mock-csi-mock-volumes-7184","csi-mock-csi-mock-volumes-7244":"csi-mock-csi-mock-volumes-7244","csi-mock-csi-mock-volumes-7259":"csi-mock-csi-mock-volumes-7259","csi-mock-csi-mock-volumes-726":"csi-mock-csi-mock-volumes-726","csi-mock-csi-mock-volumes-7302":"csi-mock-csi-mock-volumes-7302","csi-mock-csi-mock-volumes-7346":"csi-mock-csi-mock-volume
s-7346","csi-mock-csi-mock-volumes-7378":"csi-mock-csi-mock-volumes-7378","csi-mock-csi-mock-volumes-7385":"csi-mock-csi-mock-volumes-7385","csi-mock-csi-mock-volumes-746":"csi-mock-csi-mock-volumes-746","csi-mock-csi-mock-volumes-7574":"csi-mock-csi-mock-volumes-7574","csi-mock-csi-mock-volumes-7712":"csi-mock-csi-mock-volumes-7712","csi-mock-csi-mock-volumes-7749":"csi-mock-csi-mock-volumes-7749","csi-mock-csi-mock-volumes-7820":"csi-mock-csi-mock-volumes-7820","csi-mock-csi-mock-volumes-7946":"csi-mock-csi-mock-volumes-7946","csi-mock-csi-mock-volumes-8159":"csi-mock-csi-mock-volumes-8159","csi-mock-csi-mock-volumes-8382":"csi-mock-csi-mock-volumes-8382","csi-mock-csi-mock-volumes-8458":"csi-mock-csi-mock-volumes-8458","csi-mock-csi-mock-volumes-8569":"csi-mock-csi-mock-volumes-8569","csi-mock-csi-mock-volumes-8626":"csi-mock-csi-mock-volumes-8626","csi-mock-csi-mock-volumes-8627":"csi-mock-csi-mock-volumes-8627","csi-mock-csi-mock-volumes-8774":"csi-mock-csi-mock-volumes-8774","csi-mock-csi-mock-volumes-8777":"csi-mock-csi-mock-volumes-8777","csi-mock-csi-mock-volumes-8880":"csi-mock-csi-mock-volumes-8880","csi-mock-csi-mock-volumes-923":"csi-mock-csi-mock-volumes-923","csi-mock-csi-mock-volumes-9279":"csi-mock-csi-mock-volumes-9279","csi-mock-csi-mock-volumes-9372":"csi-mock-csi-mock-volumes-9372","csi-mock-csi-mock-volumes-9687":"csi-mock-csi-mock-volumes-9687","csi-mock-csi-mock-volumes-969":"csi-mock-csi-mock-volumes-969","csi-mock-csi-mock-volumes-983":"csi-mock-csi-mock-volumes-983","csi-mock-csi-mock-volumes-9858":"csi-mock-csi-mock-volumes-9858","csi-mock-csi-mock-volumes-9983":"csi-mock-csi-mock-volumes-9983","csi-mock-csi-mock-volumes-9992":"csi-mock-csi-mock-volumes-9992","csi-mock-csi-mock-volumes-9995":"csi-mock-csi-mock-volumes-9995"} kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-02-19 10:12:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {kube-controller-manager Update v1 2021-03-22 00:39:31 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}}}}} {kubelet Update v1 2021-03-22 00:39:59 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:csi.volume.kubernetes.io/nodeid":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:io.kubernetes.storage.mock/node":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:kind://docker/latest/latest-worker,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-03-22 00:45:00 +0000 UTC,LastTransitionTime:2021-02-19 10:12:05 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-03-22 00:45:00 +0000 UTC,LastTransitionTime:2021-02-19 10:12:05 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-03-22 00:45:00 +0000 UTC,LastTransitionTime:2021-02-19 10:12:05 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-03-22 00:45:00 +0000 UTC,LastTransitionTime:2021-02-19 10:12:25 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready 
status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.9,},NodeAddress{Type:Hostname,Address:latest-worker,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:9501a3ffa7bf40adaca8f4cb3d1dcc93,SystemUUID:6921bb21-67c9-42ea-b514-81b1daac5968,BootID:b267d78b-f69b-4338-80e8-3f4944338e5d,KernelVersion:4.15.0-118-generic,OSImage:Ubuntu 20.04 LTS,ContainerRuntimeVersion:containerd://1.3.3-14-g449e9269,KubeletVersion:v1.21.0-alpha.0,KubeProxyVersion:v1.21.0-alpha.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/ollivier/clearwater-cassandra@sha256:7460a5e29c91e16bd36534b5284b27832e1c6e0f7c22a4bb79eda79942d250e1 docker.io/ollivier/clearwater-cassandra:hunter],SizeBytes:386500834,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead-prov@sha256:ef3803b4336ea217f9a467dac7f5c4bc3d2cd0f3cbbcc6c101419506fc3a6fa4 docker.io/ollivier/clearwater-homestead-prov:hunter],SizeBytes:360721934,},ContainerImage{Names:[docker.io/ollivier/clearwater-ellis@sha256:abcaf4a9ec74aa746f3b3949f05ee359349fce2b0b3b2deedf67164719cff5dc docker.io/ollivier/clearwater-ellis:hunter],SizeBytes:351519591,},ContainerImage{Names:[docker.io/ollivier/clearwater-homer@sha256:f8738d756137990120f6aafca7e0bc57186ab591a4b51508ba9ddf30688f4de1 docker.io/ollivier/clearwater-homer:hunter],SizeBytes:344304298,},ContainerImage{Names:[docker.io/ollivier/clearwater-astaire@sha256:5bc730435563dc3e31f3a3b7bb58b5899c453929b624052622b3a9a1c2fd04d8 docker.io/ollivier/clearwater-astaire:hunter],SizeBytes:327310970,},ContainerImage{Names:[docker.io/ollivier/clearwater-bono@sha256:0da41ddbf72c158f8e617015450587634cdf6856696005601b2108df92a27254 docker.io/ollivier/clearwater-bono:hunter],SizeBytes:303708624,},ContainerImage{Names:[docker.io/ollivier/clearwater-sprout@sha256:fd91f90bdfbfac30288ca296962e36537f8fa311d34711904e027098227d9f49 docker.io/ollivier/clearwater-sprout:hunter],SizeBytes:298627136,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead@sha256:8467d6c9caefa6a402eba927ee31f8179b2dc3386ac4768dd04f4bd630a1b9e9 docker.io/ollivier/clearwater-homestead:hunter],SizeBytes:295167572,},ContainerImage{Names:[docker.io/ollivier/clearwater-ralf@sha256:7054a741dbd4ea45fa4efd8138c07e21d35fbed627971559de5df2d03883e94f docker.io/ollivier/clearwater-ralf:hunter],SizeBytes:287441316,},ContainerImage{Names:[docker.io/ollivier/clearwater-chronos@sha256:1d5f9b5385db632e3076463fec5a4db5d4a15ef21ac5a112cb42fb0551ffa36d docker.io/ollivier/clearwater-chronos:hunter],SizeBytes:285504787,},ContainerImage{Names:[k8s.gcr.io/etcd:3.4.13-0],SizeBytes:254659261,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.21.0-alpha.0],SizeBytes:136866158,},ContainerImage{Names:[docker.io/aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 docker.io/aquasec/kube-hunter:0.3.1],SizeBytes:124684106,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:112029652,},ContainerImage{Names:[docker.io/gluster/glusterdynamic-provisioner@sha256:90067cb05a7d217651e84576935934fd9dcff8e6bcdcbaa416bbf36fcd09dbd1 
docker.io/gluster/glusterdynamic-provisioner:v1.0],SizeBytes:111200078,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner@sha256:8bc20b52ce066dd4ea3d9eaac40c04ea8a77f47c33789676580cf4c7c9ea3c3d k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner:v1.0],SizeBytes:111199402,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.21.0-alpha.0],SizeBytes:95511852,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.21.0-alpha.0],SizeBytes:88147262,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:85425365,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.21.0-alpha.0],SizeBytes:66088748,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:49230179,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:a3f7549ea04c419276e8b84e90a515bbce5bc8a057be2ed974ec45492eca346e k8s.gcr.io/e2e-test-images/agnhost:2.26],SizeBytes:49216572,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:4f3eec1e602e1e0b0e5858c8f3d399d0ef536799ec1f7666e025f211001cb706 k8s.gcr.io/e2e-test-images/agnhost:2.28],SizeBytes:49210832,},ContainerImage{Names:[docker.io/bitnami/kubectl@sha256:471a2f06834db208d0e4bef5856550f911357c41b72d0baefa591b3714839067 docker.io/bitnami/kubectl:latest],SizeBytes:48898314,},ContainerImage{Names:[k8s.gcr.io/coredns:1.7.0],SizeBytes:45355487,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:41902332,},ContainerImage{Names:[docker.io/library/httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a docker.io/library/httpd:2.4.39-alpine],SizeBytes:41901429,},ContainerImage{Names:[docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 docker.io/library/httpd:2.4.38-alpine],SizeBytes:40765017,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:40765006,},ContainerImage{Names:[docker.io/ollivier/clearwater-live-test@sha256:1ec1c46f531b5466e62e2a3f1e65e97336e8ea0ca50f11a4305775d6aeac9a58 docker.io/ollivier/clearwater-live-test:hunter],SizeBytes:39060692,},ContainerImage{Names:[k8s.gcr.io/build-image/debian-iptables@sha256:abe8cef9e116f2d5ec1175c386e33841ff3386779138b425af876384b0fd7ccb k8s.gcr.io/build-image/debian-iptables:buster-v1.5.0],SizeBytes:38088315,},ContainerImage{Names:[k8s.gcr.io/build-image/debian-iptables@sha256:4c9410a4ee555dcb0e8b7bd6fc77c65ac400f7c5bd4555df8187630efaea6ea4 k8s.gcr.io/build-image/debian-iptables:buster-v1.3.0],SizeBytes:37934917,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:ff02aacd9766d597883fabafc7ad604c719a57611db1bcc1564c69a45b000a55 gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.17],SizeBytes:25311280,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 
k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:24757245,},ContainerImage{Names:[docker.io/pingcap/chaos-daemon@sha256:2138de599ca94104101fcd8f3bd04401f8c88f2bc721d0f77d55a81b85f4d92f docker.io/pingcap/chaos-daemon:v0.8.0],SizeBytes:21885344,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:20c828075d1e36f679d6a91e905b0927141eef5e15be0c9a1ca4a6a0ed9313d2 k8s.gcr.io/sig-storage/csi-provisioner:v2.1.0],SizeBytes:21205045,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:78e3393f5fd5ff6c1e5dada2478cfa456fb7164929e573cf9a87bf6532730679 k8s.gcr.io/sig-storage/csi-provisioner:v1.6.0],SizeBytes:19408504,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:2ffa647e8107cfd39e5f464e738dce014c9f5e51b108da36c3ab621048d0bbab k8s.gcr.io/sig-storage/csi-attacher:v2.2.0],SizeBytes:18451536,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:6c6a0332693a7c456378f6abd2bb40611826c1e1a733cadbdae2daab3125b71c k8s.gcr.io/sig-storage/csi-resizer:v0.5.0],SizeBytes:18412631,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:4051e85640c22f8e00c34dbd273576fc9e1e2829992656588062be9c0f69b04b k8s.gcr.io/e2e-test-images/nonroot:1.1],SizeBytes:17748448,},ContainerImage{Names:[docker.io/pingcap/chaos-mesh@sha256:b90c04665e0275602e52df37d4b6fdd4621df7d4e4c823fc63356559f67dea72 docker.io/pingcap/chaos-mesh:v0.8.0],SizeBytes:13863887,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e docker.io/coredns/coredns:latest docker.io/coredns/coredns:1.8.0],SizeBytes:12945155,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:4a6e0769130686518325b21b0c1d0688b54e7c79244d48e1b15634e98e40c6ef docker.io/coredns/coredns:1.7.1],SizeBytes:12935701,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 docker.io/aquasec/kube-bench:0.3.1],SizeBytes:8042926,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:9622c6a6dac7499a055a382930f4de82905a3c5735c0753f7094115c9c871309 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v1.3.0],SizeBytes:7717137,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 22 00:49:20.934: INFO: Logging kubelet events for node latest-worker Mar 22 00:49:20.939: INFO: Logging pods the kubelet thinks is on node latest-worker Mar 22 00:49:20.945: INFO: kube-proxy-5wvjm started at 2021-02-19 10:12:05 +0000 UTC (0+1 container statuses recorded) Mar 22 00:49:20.945: INFO: Container kube-proxy ready: true, restart count 0 Mar 22 00:49:20.945: INFO: kindnet-l4mzm started at 2021-03-22 00:02:51 +0000 UTC (0+1 container statuses recorded) Mar 22 00:49:20.945: INFO: Container kindnet-cni ready: true, restart count 0 Mar 22 00:49:20.945: INFO: chaos-daemon-vb9xf started at 2021-03-22 00:02:51 +0000 UTC (0+1 container statuses recorded) Mar 22 00:49:20.945: INFO: Container chaos-daemon ready: true, restart count 0 Mar 22 00:49:20.945: INFO: chaos-controller-manager-69c479c674-rdmrr started at 2021-03-22 00:02:52 +0000 UTC (0+1 container 
statuses recorded) Mar 22 00:49:20.945: INFO: Container chaos-mesh ready: true, restart count 0 Mar 22 00:49:20.945: INFO: execpod2jm5f started at 2021-03-22 00:47:14 +0000 UTC (0+1 container statuses recorded) Mar 22 00:49:20.945: INFO: Container agnhost-container ready: true, restart count 0 W0322 00:49:20.951317 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Mar 22 00:49:21.222: INFO: Latency metrics for node latest-worker Mar 22 00:49:21.222: INFO: Logging node info for node latest-worker2 Mar 22 00:49:21.252: INFO: Node Info: &Node{ObjectMeta:{latest-worker2 7d2a1377-0c6f-45fb-899e-6c307ecb1803 7003470 0 2021-02-19 10:12:05 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux io.kubernetes.storage.mock/node:some-mock-node kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-worker2 kubernetes.io/os:linux] map[csi.volume.kubernetes.io/nodeid:{"csi-mock-csi-mock-volumes-1005":"csi-mock-csi-mock-volumes-1005","csi-mock-csi-mock-volumes-1125":"csi-mock-csi-mock-volumes-1125","csi-mock-csi-mock-volumes-1158":"csi-mock-csi-mock-volumes-1158","csi-mock-csi-mock-volumes-1167":"csi-mock-csi-mock-volumes-1167","csi-mock-csi-mock-volumes-117":"csi-mock-csi-mock-volumes-117","csi-mock-csi-mock-volumes-123":"csi-mock-csi-mock-volumes-123","csi-mock-csi-mock-volumes-1248":"csi-mock-csi-mock-volumes-1248","csi-mock-csi-mock-volumes-132":"csi-mock-csi-mock-volumes-132","csi-mock-csi-mock-volumes-1462":"csi-mock-csi-mock-volumes-1462","csi-mock-csi-mock-volumes-1508":"csi-mock-csi-mock-volumes-1508","csi-mock-csi-mock-volumes-1620":"csi-mock-csi-mock-volumes-1620","csi-mock-csi-mock-volumes-1694":"csi-mock-csi-mock-volumes-1694","csi-mock-csi-mock-volumes-1823":"csi-mock-csi-mock-volumes-1823","csi-mock-csi-mock-volumes-1829":"csi-mock-csi-mock-volumes-1829","csi-mock-csi-mock-volumes-1852":"csi-mock-csi-mock-volumes-1852","csi-mock-csi-mock-volumes-1856":"csi-mock-csi-mock-volumes-1856","csi-mock-csi-mock-volumes-194":"csi-mock-csi-mock-volumes-194","csi-mock-csi-mock-volumes-1943":"csi-mock-csi-mock-volumes-1943","csi-mock-csi-mock-volumes-2103":"csi-mock-csi-mock-volumes-2103","csi-mock-csi-mock-volumes-2109":"csi-mock-csi-mock-volumes-2109","csi-mock-csi-mock-volumes-2134":"csi-mock-csi-mock-volumes-2134","csi-mock-csi-mock-volumes-2212":"csi-mock-csi-mock-volumes-2212","csi-mock-csi-mock-volumes-2302":"csi-mock-csi-mock-volumes-2302","csi-mock-csi-mock-volumes-2308":"csi-mock-csi-mock-volumes-2308","csi-mock-csi-mock-volumes-2407":"csi-mock-csi-mock-volumes-2407","csi-mock-csi-mock-volumes-2458":"csi-mock-csi-mock-volumes-2458","csi-mock-csi-mock-volumes-2474":"csi-mock-csi-mock-volumes-2474","csi-mock-csi-mock-volumes-254":"csi-mock-csi-mock-volumes-254","csi-mock-csi-mock-volumes-2575":"csi-mock-csi-mock-volumes-2575","csi-mock-csi-mock-volumes-2622":"csi-mock-csi-mock-volumes-2622","csi-mock-csi-mock-volumes-2651":"csi-mock-csi-mock-volumes-2651","csi-mock-csi-mock-volumes-2668":"csi-mock-csi-mock-volumes-2668","csi-mock-csi-mock-volumes-2781":"csi-mock-csi-mock-volumes-2781","csi-mock-csi-mock-volumes-2791":"csi-mock-csi-mock-volumes-2791","csi-mock-csi-mock-volumes-2823":"csi-mock-csi-mock-volumes-2823","csi-mock-csi-mock-volumes-2847":"csi-mock-csi-mock-volumes-2847","csi-mock-csi-mock-volumes-295":"csi-mock-csi-mock-volumes-295","csi-mock-csi-mock-volumes-3129":"csi-mock-csi-mock-volumes-3129","csi-mock-csi-mock-volumes-3190":"csi-mock-csi-mock-volumes-3190","csi-mock-csi-mock-volumes-3428":"c
si-mock-csi-mock-volumes-3428","csi-mock-csi-mock-volumes-3490":"csi-mock-csi-mock-volumes-3490","csi-mock-csi-mock-volumes-3491":"csi-mock-csi-mock-volumes-3491","csi-mock-csi-mock-volumes-3532":"csi-mock-csi-mock-volumes-3532","csi-mock-csi-mock-volumes-3609":"csi-mock-csi-mock-volumes-3609","csi-mock-csi-mock-volumes-368":"csi-mock-csi-mock-volumes-368","csi-mock-csi-mock-volumes-3698":"csi-mock-csi-mock-volumes-3698","csi-mock-csi-mock-volumes-3714":"csi-mock-csi-mock-volumes-3714","csi-mock-csi-mock-volumes-3720":"csi-mock-csi-mock-volumes-3720","csi-mock-csi-mock-volumes-3722":"csi-mock-csi-mock-volumes-3722","csi-mock-csi-mock-volumes-3723":"csi-mock-csi-mock-volumes-3723","csi-mock-csi-mock-volumes-3783":"csi-mock-csi-mock-volumes-3783","csi-mock-csi-mock-volumes-3887":"csi-mock-csi-mock-volumes-3887","csi-mock-csi-mock-volumes-3973":"csi-mock-csi-mock-volumes-3973","csi-mock-csi-mock-volumes-4074":"csi-mock-csi-mock-volumes-4074","csi-mock-csi-mock-volumes-4164":"csi-mock-csi-mock-volumes-4164","csi-mock-csi-mock-volumes-4181":"csi-mock-csi-mock-volumes-4181","csi-mock-csi-mock-volumes-4442":"csi-mock-csi-mock-volumes-4442","csi-mock-csi-mock-volumes-4483":"csi-mock-csi-mock-volumes-4483","csi-mock-csi-mock-volumes-4549":"csi-mock-csi-mock-volumes-4549","csi-mock-csi-mock-volumes-4707":"csi-mock-csi-mock-volumes-4707","csi-mock-csi-mock-volumes-4906":"csi-mock-csi-mock-volumes-4906","csi-mock-csi-mock-volumes-4977":"csi-mock-csi-mock-volumes-4977","csi-mock-csi-mock-volumes-5116":"csi-mock-csi-mock-volumes-5116","csi-mock-csi-mock-volumes-5166":"csi-mock-csi-mock-volumes-5166","csi-mock-csi-mock-volumes-5233":"csi-mock-csi-mock-volumes-5233","csi-mock-csi-mock-volumes-5344":"csi-mock-csi-mock-volumes-5344","csi-mock-csi-mock-volumes-5406":"csi-mock-csi-mock-volumes-5406","csi-mock-csi-mock-volumes-5433":"csi-mock-csi-mock-volumes-5433","csi-mock-csi-mock-volumes-5483":"csi-mock-csi-mock-volumes-5483","csi-mock-csi-mock-volumes-5486":"csi-mock-csi-mock-volumes-5486","csi-mock-csi-mock-volumes-5520":"csi-mock-csi-mock-volumes-5520","csi-mock-csi-mock-volumes-5540":"csi-mock-csi-mock-volumes-5540","csi-mock-csi-mock-volumes-5592":"csi-mock-csi-mock-volumes-5592","csi-mock-csi-mock-volumes-5741":"csi-mock-csi-mock-volumes-5741","csi-mock-csi-mock-volumes-5753":"csi-mock-csi-mock-volumes-5753","csi-mock-csi-mock-volumes-5790":"csi-mock-csi-mock-volumes-5790","csi-mock-csi-mock-volumes-5820":"csi-mock-csi-mock-volumes-5820","csi-mock-csi-mock-volumes-5830":"csi-mock-csi-mock-volumes-5830","csi-mock-csi-mock-volumes-5880":"csi-mock-csi-mock-volumes-5880","csi-mock-csi-mock-volumes-5886":"csi-mock-csi-mock-volumes-5886","csi-mock-csi-mock-volumes-5899":"csi-mock-csi-mock-volumes-5899","csi-mock-csi-mock-volumes-5925":"csi-mock-csi-mock-volumes-5925","csi-mock-csi-mock-volumes-5928":"csi-mock-csi-mock-volumes-5928","csi-mock-csi-mock-volumes-6098":"csi-mock-csi-mock-volumes-6098","csi-mock-csi-mock-volumes-6154":"csi-mock-csi-mock-volumes-6154","csi-mock-csi-mock-volumes-6193":"csi-mock-csi-mock-volumes-6193","csi-mock-csi-mock-volumes-6237":"csi-mock-csi-mock-volumes-6237","csi-mock-csi-mock-volumes-6393":"csi-mock-csi-mock-volumes-6393","csi-mock-csi-mock-volumes-6394":"csi-mock-csi-mock-volumes-6394","csi-mock-csi-mock-volumes-6468":"csi-mock-csi-mock-volumes-6468","csi-mock-csi-mock-volumes-6508":"csi-mock-csi-mock-volumes-6508","csi-mock-csi-mock-volumes-6516":"csi-mock-csi-mock-volumes-6516","csi-mock-csi-mock-volumes-6520":"csi-mock-csi-mock-volumes-6520","csi-mock-csi-mock-volumes-6
574":"csi-mock-csi-mock-volumes-6574","csi-mock-csi-mock-volumes-6663":"csi-mock-csi-mock-volumes-6663","csi-mock-csi-mock-volumes-6715":"csi-mock-csi-mock-volumes-6715","csi-mock-csi-mock-volumes-6754":"csi-mock-csi-mock-volumes-6754","csi-mock-csi-mock-volumes-6804":"csi-mock-csi-mock-volumes-6804","csi-mock-csi-mock-volumes-6918":"csi-mock-csi-mock-volumes-6918","csi-mock-csi-mock-volumes-6925":"csi-mock-csi-mock-volumes-6925","csi-mock-csi-mock-volumes-7092":"csi-mock-csi-mock-volumes-7092","csi-mock-csi-mock-volumes-7139":"csi-mock-csi-mock-volumes-7139","csi-mock-csi-mock-volumes-7270":"csi-mock-csi-mock-volumes-7270","csi-mock-csi-mock-volumes-7273":"csi-mock-csi-mock-volumes-7273","csi-mock-csi-mock-volumes-7442":"csi-mock-csi-mock-volumes-7442","csi-mock-csi-mock-volumes-7448":"csi-mock-csi-mock-volumes-7448","csi-mock-csi-mock-volumes-7543":"csi-mock-csi-mock-volumes-7543","csi-mock-csi-mock-volumes-7597":"csi-mock-csi-mock-volumes-7597","csi-mock-csi-mock-volumes-7608":"csi-mock-csi-mock-volumes-7608","csi-mock-csi-mock-volumes-7642":"csi-mock-csi-mock-volumes-7642","csi-mock-csi-mock-volumes-7659":"csi-mock-csi-mock-volumes-7659","csi-mock-csi-mock-volumes-7725":"csi-mock-csi-mock-volumes-7725","csi-mock-csi-mock-volumes-7760":"csi-mock-csi-mock-volumes-7760","csi-mock-csi-mock-volumes-778":"csi-mock-csi-mock-volumes-778","csi-mock-csi-mock-volumes-7811":"csi-mock-csi-mock-volumes-7811","csi-mock-csi-mock-volumes-7819":"csi-mock-csi-mock-volumes-7819","csi-mock-csi-mock-volumes-791":"csi-mock-csi-mock-volumes-791","csi-mock-csi-mock-volumes-7929":"csi-mock-csi-mock-volumes-7929","csi-mock-csi-mock-volumes-7930":"csi-mock-csi-mock-volumes-7930","csi-mock-csi-mock-volumes-7933":"csi-mock-csi-mock-volumes-7933","csi-mock-csi-mock-volumes-7950":"csi-mock-csi-mock-volumes-7950","csi-mock-csi-mock-volumes-8005":"csi-mock-csi-mock-volumes-8005","csi-mock-csi-mock-volumes-8070":"csi-mock-csi-mock-volumes-8070","csi-mock-csi-mock-volumes-8123":"csi-mock-csi-mock-volumes-8123","csi-mock-csi-mock-volumes-8132":"csi-mock-csi-mock-volumes-8132","csi-mock-csi-mock-volumes-8134":"csi-mock-csi-mock-volumes-8134","csi-mock-csi-mock-volumes-8381":"csi-mock-csi-mock-volumes-8381","csi-mock-csi-mock-volumes-8391":"csi-mock-csi-mock-volumes-8391","csi-mock-csi-mock-volumes-8409":"csi-mock-csi-mock-volumes-8409","csi-mock-csi-mock-volumes-8420":"csi-mock-csi-mock-volumes-8420","csi-mock-csi-mock-volumes-8467":"csi-mock-csi-mock-volumes-8467","csi-mock-csi-mock-volumes-8581":"csi-mock-csi-mock-volumes-8581","csi-mock-csi-mock-volumes-8619":"csi-mock-csi-mock-volumes-8619","csi-mock-csi-mock-volumes-8708":"csi-mock-csi-mock-volumes-8708","csi-mock-csi-mock-volumes-8766":"csi-mock-csi-mock-volumes-8766","csi-mock-csi-mock-volumes-8789":"csi-mock-csi-mock-volumes-8789","csi-mock-csi-mock-volumes-8800":"csi-mock-csi-mock-volumes-8800","csi-mock-csi-mock-volumes-8819":"csi-mock-csi-mock-volumes-8819","csi-mock-csi-mock-volumes-8830":"csi-mock-csi-mock-volumes-8830","csi-mock-csi-mock-volumes-9034":"csi-mock-csi-mock-volumes-9034","csi-mock-csi-mock-volumes-9051":"csi-mock-csi-mock-volumes-9051","csi-mock-csi-mock-volumes-9052":"csi-mock-csi-mock-volumes-9052","csi-mock-csi-mock-volumes-9211":"csi-mock-csi-mock-volumes-9211","csi-mock-csi-mock-volumes-9234":"csi-mock-csi-mock-volumes-9234","csi-mock-csi-mock-volumes-9238":"csi-mock-csi-mock-volumes-9238","csi-mock-csi-mock-volumes-9316":"csi-mock-csi-mock-volumes-9316","csi-mock-csi-mock-volumes-9344":"csi-mock-csi-mock-volumes-9344","csi-mock-csi-mock-volu
mes-9386":"csi-mock-csi-mock-volumes-9386","csi-mock-csi-mock-volumes-941":"csi-mock-csi-mock-volumes-941","csi-mock-csi-mock-volumes-9416":"csi-mock-csi-mock-volumes-9416","csi-mock-csi-mock-volumes-9426":"csi-mock-csi-mock-volumes-9426","csi-mock-csi-mock-volumes-9429":"csi-mock-csi-mock-volumes-9429","csi-mock-csi-mock-volumes-9438":"csi-mock-csi-mock-volumes-9438","csi-mock-csi-mock-volumes-9439":"csi-mock-csi-mock-volumes-9439","csi-mock-csi-mock-volumes-9665":"csi-mock-csi-mock-volumes-9665","csi-mock-csi-mock-volumes-9772":"csi-mock-csi-mock-volumes-9772","csi-mock-csi-mock-volumes-9793":"csi-mock-csi-mock-volumes-9793","csi-mock-csi-mock-volumes-9921":"csi-mock-csi-mock-volumes-9921","csi-mock-csi-mock-volumes-9953":"csi-mock-csi-mock-volumes-9953"} kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-02-19 10:12:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {kube-controller-manager Update v1 2021-03-22 00:34:44 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}} {e2e.test Update v1 2021-03-22 00:44:05 +0000 UTC FieldsV1 {"f:status":{"f:capacity":{"f:example.com/fakecpu":{}}}}} {kubelet Update v1 2021-03-22 00:44:09 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:csi.volume.kubernetes.io/nodeid":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:io.kubernetes.storage.mock/node":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:example.com/fakecpu":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:kind://docker/latest/latest-worker2,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},example.com/fakecpu: {{1 3} {} 1k 
DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},example.com/fakecpu: {{1 3} {} 1k DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-03-22 00:49:10 +0000 UTC,LastTransitionTime:2021-02-19 10:12:05 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-03-22 00:49:10 +0000 UTC,LastTransitionTime:2021-02-19 10:12:05 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-03-22 00:49:10 +0000 UTC,LastTransitionTime:2021-02-19 10:12:05 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-03-22 00:49:10 +0000 UTC,LastTransitionTime:2021-02-19 10:12:35 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.13,},NodeAddress{Type:Hostname,Address:latest-worker2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:c99819c175ab4bf18f91789643e4cec7,SystemUUID:67d3f1bb-f61c-4599-98ac-5879bee4ddde,BootID:b267d78b-f69b-4338-80e8-3f4944338e5d,KernelVersion:4.15.0-118-generic,OSImage:Ubuntu 20.04 LTS,ContainerRuntimeVersion:containerd://1.3.3-14-g449e9269,KubeletVersion:v1.21.0-alpha.0,KubeProxyVersion:v1.21.0-alpha.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/ollivier/clearwater-cassandra@sha256:7460a5e29c91e16bd36534b5284b27832e1c6e0f7c22a4bb79eda79942d250e1 docker.io/ollivier/clearwater-cassandra:hunter],SizeBytes:386500834,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead-prov@sha256:ef3803b4336ea217f9a467dac7f5c4bc3d2cd0f3cbbcc6c101419506fc3a6fa4 docker.io/ollivier/clearwater-homestead-prov:hunter],SizeBytes:360721934,},ContainerImage{Names:[docker.io/ollivier/clearwater-ellis@sha256:abcaf4a9ec74aa746f3b3949f05ee359349fce2b0b3b2deedf67164719cff5dc docker.io/ollivier/clearwater-ellis:hunter],SizeBytes:351519591,},ContainerImage{Names:[docker.io/ollivier/clearwater-homer@sha256:f8738d756137990120f6aafca7e0bc57186ab591a4b51508ba9ddf30688f4de1 docker.io/ollivier/clearwater-homer:hunter],SizeBytes:344304298,},ContainerImage{Names:[docker.io/ollivier/clearwater-astaire@sha256:5bc730435563dc3e31f3a3b7bb58b5899c453929b624052622b3a9a1c2fd04d8 docker.io/ollivier/clearwater-astaire:hunter],SizeBytes:327310970,},ContainerImage{Names:[docker.io/ollivier/clearwater-bono@sha256:0da41ddbf72c158f8e617015450587634cdf6856696005601b2108df92a27254 docker.io/ollivier/clearwater-bono:hunter],SizeBytes:303708624,},ContainerImage{Names:[docker.io/ollivier/clearwater-sprout@sha256:fd91f90bdfbfac30288ca296962e36537f8fa311d34711904e027098227d9f49 docker.io/ollivier/clearwater-sprout:hunter],SizeBytes:298627136,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead@sha256:8467d6c9caefa6a402eba927ee31f8179b2dc3386ac4768dd04f4bd630a1b9e9 
docker.io/ollivier/clearwater-homestead:hunter],SizeBytes:295167572,},ContainerImage{Names:[docker.io/ollivier/clearwater-ralf@sha256:7054a741dbd4ea45fa4efd8138c07e21d35fbed627971559de5df2d03883e94f docker.io/ollivier/clearwater-ralf:hunter],SizeBytes:287441316,},ContainerImage{Names:[docker.io/ollivier/clearwater-chronos@sha256:1d5f9b5385db632e3076463fec5a4db5d4a15ef21ac5a112cb42fb0551ffa36d docker.io/ollivier/clearwater-chronos:hunter],SizeBytes:285504787,},ContainerImage{Names:[k8s.gcr.io/etcd:3.4.13-0],SizeBytes:254659261,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.21.0-alpha.0],SizeBytes:136866158,},ContainerImage{Names:[docker.io/aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 docker.io/aquasec/kube-hunter:0.3.1],SizeBytes:124684106,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:112029652,},ContainerImage{Names:[docker.io/gluster/glusterdynamic-provisioner@sha256:90067cb05a7d217651e84576935934fd9dcff8e6bcdcbaa416bbf36fcd09dbd1 docker.io/gluster/glusterdynamic-provisioner:v1.0],SizeBytes:111200078,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner@sha256:8bc20b52ce066dd4ea3d9eaac40c04ea8a77f47c33789676580cf4c7c9ea3c3d k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner:v1.0],SizeBytes:111199402,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.21.0-alpha.0],SizeBytes:95511852,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.21.0-alpha.0],SizeBytes:88147262,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:85425365,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.21.0-alpha.0],SizeBytes:66088748,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:49230179,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:a3f7549ea04c419276e8b84e90a515bbce5bc8a057be2ed974ec45492eca346e k8s.gcr.io/e2e-test-images/agnhost:2.26],SizeBytes:49216572,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:4f3eec1e602e1e0b0e5858c8f3d399d0ef536799ec1f7666e025f211001cb706 k8s.gcr.io/e2e-test-images/agnhost:2.28],SizeBytes:49210832,},ContainerImage{Names:[k8s.gcr.io/coredns:1.7.0],SizeBytes:45355487,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:41902332,},ContainerImage{Names:[docker.io/library/httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a docker.io/library/httpd:2.4.39-alpine],SizeBytes:41901429,},ContainerImage{Names:[docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 docker.io/library/httpd:2.4.38-alpine],SizeBytes:40765017,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 
k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:40765006,},ContainerImage{Names:[docker.io/ollivier/clearwater-live-test@sha256:1ec1c46f531b5466e62e2a3f1e65e97336e8ea0ca50f11a4305775d6aeac9a58 docker.io/ollivier/clearwater-live-test:hunter],SizeBytes:39060692,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:ff02aacd9766d597883fabafc7ad604c719a57611db1bcc1564c69a45b000a55 gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.17],SizeBytes:25311280,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:24757245,},ContainerImage{Names:[docker.io/pingcap/chaos-daemon@sha256:2138de599ca94104101fcd8f3bd04401f8c88f2bc721d0f77d55a81b85f4d92f docker.io/pingcap/chaos-daemon:v0.8.0],SizeBytes:21885344,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:20c828075d1e36f679d6a91e905b0927141eef5e15be0c9a1ca4a6a0ed9313d2 k8s.gcr.io/sig-storage/csi-provisioner:v2.1.0],SizeBytes:21205045,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:78e3393f5fd5ff6c1e5dada2478cfa456fb7164929e573cf9a87bf6532730679 k8s.gcr.io/sig-storage/csi-provisioner:v1.6.0],SizeBytes:19408504,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/regression-issue-74839@sha256:b4f1d8d61bdad84bd50442d161d5460e4019d53e989b64220fdbc62fc87d76bf k8s.gcr.io/e2e-test-images/regression-issue-74839:1.2],SizeBytes:18651485,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:2ffa647e8107cfd39e5f464e738dce014c9f5e51b108da36c3ab621048d0bbab k8s.gcr.io/sig-storage/csi-attacher:v2.2.0],SizeBytes:18451536,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:6c6a0332693a7c456378f6abd2bb40611826c1e1a733cadbdae2daab3125b71c k8s.gcr.io/sig-storage/csi-resizer:v0.5.0],SizeBytes:18412631,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:4051e85640c22f8e00c34dbd273576fc9e1e2829992656588062be9c0f69b04b k8s.gcr.io/e2e-test-images/nonroot:1.1],SizeBytes:17748448,},ContainerImage{Names:[docker.io/pingcap/chaos-mesh@sha256:b90c04665e0275602e52df37d4b6fdd4621df7d4e4c823fc63356559f67dea72 docker.io/pingcap/chaos-mesh:v0.8.0],SizeBytes:13863887,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e docker.io/coredns/coredns:1.8.0],SizeBytes:12945155,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:4a6e0769130686518325b21b0c1d0688b54e7c79244d48e1b15634e98e40c6ef docker.io/coredns/coredns:1.7.1],SizeBytes:12935701,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:642ff9910da6ea9a8624b0234eef52af9ca75ecbec474c5507cb096bdfbae4e5 docker.io/coredns/coredns:latest],SizeBytes:12893350,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 docker.io/aquasec/kube-bench:0.3.1],SizeBytes:8042926,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:9622c6a6dac7499a055a382930f4de82905a3c5735c0753f7094115c9c871309 
k8s.gcr.io/sig-storage/csi-node-driver-registrar:v1.3.0],SizeBytes:7717137,},ContainerImage{Names:[quay.io/coreos/etcd:v2.2.5],SizeBytes:7670543,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 22 00:49:21.253: INFO: Logging kubelet events for node latest-worker2 Mar 22 00:49:21.258: INFO: Logging pods the kubelet thinks is on node latest-worker2 Mar 22 00:49:21.275: INFO: chaos-daemon-4zjcg started at 2021-03-22 00:02:52 +0000 UTC (0+1 container statuses recorded) Mar 22 00:49:21.275: INFO: Container chaos-daemon ready: true, restart count 0 Mar 22 00:49:21.275: INFO: hostexec-latest-worker2-jxp2r started at 2021-03-22 00:49:18 +0000 UTC (0+1 container statuses recorded) Mar 22 00:49:21.275: INFO: Container agnhost-container ready: true, restart count 0 Mar 22 00:49:21.275: INFO: externalname-service-j4skh started at 2021-03-22 00:46:59 +0000 UTC (0+1 container statuses recorded) Mar 22 00:49:21.275: INFO: Container externalname-service ready: true, restart count 0 Mar 22 00:49:21.275: INFO: externalname-service-qmc9p started at 2021-03-22 00:46:59 +0000 UTC (0+1 container statuses recorded) Mar 22 00:49:21.275: INFO: Container externalname-service ready: true, restart count 0 Mar 22 00:49:21.275: INFO: kube-proxy-7q92q started at 2021-02-19 10:12:05 +0000 UTC (0+1 container statuses recorded) Mar 22 00:49:21.275: INFO: Container kube-proxy ready: true, restart count 0 Mar 22 00:49:21.275: INFO: kindnet-7qb7q started at 2021-03-22 00:02:52 +0000 UTC (0+1 container statuses recorded) Mar 22 00:49:21.275: INFO: Container kindnet-cni ready: true, restart count 0 W0322 00:49:21.280167 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Mar 22 00:49:21.514: INFO: Latency metrics for node latest-worker2 Mar 22 00:49:21.514: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-9063" for this suite. 
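------------------------------
The node dumps above are the diagnostics the framework emits on failure; they belong to the [sig-network] Services failure summarized just below. The operation under test, flipping a Service's spec.type from ExternalName to NodePort, can be sketched with client-go roughly as follows. The namespace and service name are taken from the log; the port is illustrative, and the failure here occurred later, while waiting for ready endpoints (which also requires a selector matching serving pods), not in the update itself.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Same kubeconfig the suite uses; adjust as needed.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ctx := context.Background()

	svc, err := cs.CoreV1().Services("services-9063").Get(ctx, "externalname-service", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}

	// Leaving ExternalName requires clearing spec.externalName and giving the
	// service at least one port; the apiserver then allocates a cluster IP and
	// a node port on update.
	svc.Spec.Type = corev1.ServiceTypeNodePort
	svc.Spec.ExternalName = ""
	svc.Spec.Ports = []corev1.ServicePort{{Name: "http", Port: 80}}

	updated, err := cs.CoreV1().Services("services-9063").Update(ctx, svc, metav1.UpdateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("allocated node port:", updated.Spec.Ports[0].NodePort)
}
------------------------------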
[AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • Failure [147.153 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should be able to change the type from ExternalName to NodePort [Conformance] [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Mar 22 00:49:20.673: Unexpected error: <*errors.errorString | 0xc000fcea00>: { s: "no subset of available IP address found for the endpoint externalname-service within timeout 2m0s", } no subset of available IP address found for the endpoint externalname-service within timeout 2m0s occurred /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1351 ------------------------------ {"msg":"FAILED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":330,"completed":149,"skipped":2512,"failed":10,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]"]} SSSSS ------------------------------ [sig-node] Probing container should not be ready with an exec readiness probe timeout [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 00:49:21.529: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:53 [It] should not be ready with an exec readiness probe timeout [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating pod busybox-560511ad-47c7-4806-b2d4-8b9ac4f02048 in namespace container-probe-2266 Mar 22 00:49:25.664: INFO: Started pod busybox-560511ad-47c7-4806-b2d4-8b9ac4f02048 in namespace container-probe-2266 Mar 22 00:49:25.664: INFO: pod container-probe-2266/busybox-560511ad-47c7-4806-b2d4-8b9ac4f02048 is not ready (1.263µs elapsed) Mar 22 00:49:27.664: INFO: 
pod container-probe-2266/busybox-560511ad-47c7-4806-b2d4-8b9ac4f02048 is not ready (2.000249707s elapsed) Mar 22 00:49:29.665: INFO: pod container-probe-2266/busybox-560511ad-47c7-4806-b2d4-8b9ac4f02048 is not ready (4.00120508s elapsed) Mar 22 00:49:31.665: INFO: pod container-probe-2266/busybox-560511ad-47c7-4806-b2d4-8b9ac4f02048 is not ready (6.001508333s elapsed) Mar 22 00:49:33.666: INFO: pod container-probe-2266/busybox-560511ad-47c7-4806-b2d4-8b9ac4f02048 is not ready (8.001999853s elapsed) Mar 22 00:49:35.667: INFO: pod container-probe-2266/busybox-560511ad-47c7-4806-b2d4-8b9ac4f02048 is not ready (10.00324082s elapsed) Mar 22 00:49:37.667: INFO: pod container-probe-2266/busybox-560511ad-47c7-4806-b2d4-8b9ac4f02048 is not ready (12.003402245s elapsed) Mar 22 00:49:39.668: INFO: pod container-probe-2266/busybox-560511ad-47c7-4806-b2d4-8b9ac4f02048 is not ready (14.00456598s elapsed) Mar 22 00:49:41.670: INFO: pod container-probe-2266/busybox-560511ad-47c7-4806-b2d4-8b9ac4f02048 is not ready (16.005640492s elapsed) Mar 22 00:49:43.670: INFO: pod container-probe-2266/busybox-560511ad-47c7-4806-b2d4-8b9ac4f02048 is not ready (18.00581941s elapsed) Mar 22 00:49:45.670: INFO: pod container-probe-2266/busybox-560511ad-47c7-4806-b2d4-8b9ac4f02048 is not ready (20.006454884s elapsed) Mar 22 00:49:47.671: INFO: pod container-probe-2266/busybox-560511ad-47c7-4806-b2d4-8b9ac4f02048 is not ready (22.007204509s elapsed) Mar 22 00:49:49.672: INFO: pod container-probe-2266/busybox-560511ad-47c7-4806-b2d4-8b9ac4f02048 is not ready (24.008177865s elapsed) Mar 22 00:49:51.672: INFO: pod container-probe-2266/busybox-560511ad-47c7-4806-b2d4-8b9ac4f02048 is not ready (26.00851252s elapsed) Mar 22 00:49:53.673: INFO: pod container-probe-2266/busybox-560511ad-47c7-4806-b2d4-8b9ac4f02048 is not ready (28.008786575s elapsed) Mar 22 00:49:55.673: INFO: pod container-probe-2266/busybox-560511ad-47c7-4806-b2d4-8b9ac4f02048 is not ready (30.009370174s elapsed) Mar 22 00:49:57.674: INFO: pod container-probe-2266/busybox-560511ad-47c7-4806-b2d4-8b9ac4f02048 is not ready (32.009979823s elapsed) Mar 22 00:49:59.675: INFO: pod container-probe-2266/busybox-560511ad-47c7-4806-b2d4-8b9ac4f02048 is not ready (34.010786725s elapsed) Mar 22 00:50:01.676: INFO: pod container-probe-2266/busybox-560511ad-47c7-4806-b2d4-8b9ac4f02048 is not ready (36.01174281s elapsed) Mar 22 00:50:03.676: INFO: pod container-probe-2266/busybox-560511ad-47c7-4806-b2d4-8b9ac4f02048 is not ready (38.012582658s elapsed) Mar 22 00:50:05.678: INFO: pod container-probe-2266/busybox-560511ad-47c7-4806-b2d4-8b9ac4f02048 is not ready (40.013664546s elapsed) Mar 22 00:50:07.678: INFO: pod container-probe-2266/busybox-560511ad-47c7-4806-b2d4-8b9ac4f02048 is not ready (42.014377692s elapsed) Mar 22 00:50:09.678: INFO: pod container-probe-2266/busybox-560511ad-47c7-4806-b2d4-8b9ac4f02048 is not ready (44.01454385s elapsed) Mar 22 00:50:11.680: INFO: pod container-probe-2266/busybox-560511ad-47c7-4806-b2d4-8b9ac4f02048 is not ready (46.015623086s elapsed) Mar 22 00:50:13.680: INFO: pod container-probe-2266/busybox-560511ad-47c7-4806-b2d4-8b9ac4f02048 is not ready (48.016604469s elapsed) Mar 22 00:50:15.682: INFO: pod container-probe-2266/busybox-560511ad-47c7-4806-b2d4-8b9ac4f02048 is not ready (50.017702567s elapsed) Mar 22 00:50:17.682: INFO: pod container-probe-2266/busybox-560511ad-47c7-4806-b2d4-8b9ac4f02048 is not ready (52.017805628s elapsed) Mar 22 00:50:19.683: INFO: pod container-probe-2266/busybox-560511ad-47c7-4806-b2d4-8b9ac4f02048 is not 
ready (54.018792368s elapsed) Mar 22 00:50:21.683: INFO: pod container-probe-2266/busybox-560511ad-47c7-4806-b2d4-8b9ac4f02048 is not ready (56.019418204s elapsed) Mar 22 00:50:23.684: INFO: pod container-probe-2266/busybox-560511ad-47c7-4806-b2d4-8b9ac4f02048 is not ready (58.020426609s elapsed) STEP: deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 00:50:25.725: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-2266" for this suite. • [SLOW TEST:64.209 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should not be ready with an exec readiness probe timeout [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Probing container should not be ready with an exec readiness probe timeout [NodeConformance] [Conformance]","total":330,"completed":150,"skipped":2517,"failed":10,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]"]} SSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 00:50:25.738: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Mar 22 00:50:25.838: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Mar 22 00:50:25.852: INFO: Waiting for terminating namespaces to be deleted... 
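------------------------------
For reference on the container-probe test above: the minute of "is not ready" polling is the expected outcome of an exec readiness probe whose command outlives its timeoutSeconds, so every probe attempt is killed, counted as a failure, and the pod never turns Ready. A minimal sketch of such a pod, reusing the imports from the earlier client-go example; the image and sleep durations are illustrative, not read from the log.

// In client-go v0.21 (contemporary with this log) the embedded probe field is
// named Handler; later releases renamed it to ProbeHandler.
func createNeverReadyPod(ctx context.Context, cs kubernetes.Interface, ns string) error {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "busybox-readiness-timeout"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "busybox",
				Image:   "docker.io/library/busybox:1.29",
				Command: []string{"sleep", "600"},
				ReadinessProbe: &corev1.Probe{
					Handler: corev1.Handler{
						Exec: &corev1.ExecAction{
							// Sleeps past the 1s timeout, so the probe never succeeds.
							Command: []string{"/bin/sh", "-c", "sleep 10"},
						},
					},
					TimeoutSeconds: 1,
					PeriodSeconds:  2,
				},
			}},
		},
	}
	_, err := cs.CoreV1().Pods(ns).Create(ctx, pod, metav1.CreateOptions{})
	return err
}
------------------------------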
Mar 22 00:50:25.855: INFO: Logging pods the apiserver thinks is on node latest-worker before test Mar 22 00:50:25.861: INFO: chaos-controller-manager-69c479c674-rdmrr from default started at 2021-03-22 00:02:52 +0000 UTC (1 container statuses recorded) Mar 22 00:50:25.861: INFO: Container chaos-mesh ready: true, restart count 0 Mar 22 00:50:25.861: INFO: chaos-daemon-vb9xf from default started at 2021-03-22 00:02:51 +0000 UTC (1 container statuses recorded) Mar 22 00:50:25.861: INFO: Container chaos-daemon ready: true, restart count 0 Mar 22 00:50:25.861: INFO: kindnet-l4mzm from kube-system started at 2021-03-22 00:02:51 +0000 UTC (1 container statuses recorded) Mar 22 00:50:25.861: INFO: Container kindnet-cni ready: true, restart count 0 Mar 22 00:50:25.861: INFO: kube-proxy-5wvjm from kube-system started at 2021-02-19 10:12:05 +0000 UTC (1 container statuses recorded) Mar 22 00:50:25.861: INFO: Container kube-proxy ready: true, restart count 0 Mar 22 00:50:25.861: INFO: Logging pods the apiserver thinks is on node latest-worker2 before test Mar 22 00:50:25.865: INFO: chaos-daemon-4zjcg from default started at 2021-03-22 00:02:52 +0000 UTC (1 container statuses recorded) Mar 22 00:50:25.865: INFO: Container chaos-daemon ready: true, restart count 0 Mar 22 00:50:25.865: INFO: kindnet-7qb7q from kube-system started at 2021-03-22 00:02:52 +0000 UTC (1 container statuses recorded) Mar 22 00:50:25.865: INFO: Container kindnet-cni ready: true, restart count 0 Mar 22 00:50:25.865: INFO: kube-proxy-7q92q from kube-system started at 2021-02-19 10:12:05 +0000 UTC (1 container statuses recorded) Mar 22 00:50:25.865: INFO: Container kube-proxy ready: true, restart count 0 Mar 22 00:50:25.865: INFO: hostexec-latest-worker2-hbfg6 from persistent-local-volumes-test-9248 started at 2021-03-22 00:50:14 +0000 UTC (1 container statuses recorded) Mar 22 00:50:25.865: INFO: Container agnhost-container ready: true, restart count 0 [It] validates that NodeSelector is respected if not matching [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Trying to schedule Pod with nonempty NodeSelector. STEP: Considering event: Type = [Warning], Name = [restricted-pod.166e83cb65fb8e71], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 2 node(s) didn't match Pod's node affinity.] [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 00:50:26.888: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-963" for this suite. 
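------------------------------
The FailedScheduling event above comes from a pod whose nonempty nodeSelector matches no node label, so the scheduler reports 0/3 nodes available and the pod stays Pending. A sketch of such a pod, under the same imports as the earlier examples; the pod name matches the event in the log, while the label value is deliberately bogus and illustrative.

func createUnschedulablePod(ctx context.Context, cs kubernetes.Interface, ns string) error {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "restricted-pod"},
		Spec: corev1.PodSpec{
			// No node carries this label, so scheduling can never succeed.
			NodeSelector: map[string]string{"kubernetes.io/hostname": "no-such-node"},
			Containers: []corev1.Container{{
				Name:  "pause",
				Image: "k8s.gcr.io/pause:3.2",
			}},
		},
	}
	_, err := cs.CoreV1().Pods(ns).Create(ctx, pod, metav1.CreateOptions{})
	return err
}
------------------------------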
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 •{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]","total":330,"completed":151,"skipped":2525,"failed":10,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] RuntimeClass should support RuntimeClasses API operations [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-node] RuntimeClass /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 00:50:26.896: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename runtimeclass STEP: Waiting for a default service account to be provisioned in namespace [It] should support RuntimeClasses API operations [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: getting /apis STEP: getting /apis/node.k8s.io STEP: getting /apis/node.k8s.io/v1 STEP: creating STEP: watching Mar 22 00:50:27.057: INFO: starting watch STEP: getting STEP: listing STEP: patching STEP: updating Mar 22 00:50:27.150: INFO: waiting for watch events with expected annotations STEP: deleting STEP: deleting a collection [AfterEach] [sig-node] RuntimeClass /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 00:50:27.210: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "runtimeclass-8649" for this suite. 
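------------------------------
The RuntimeClass API walk above (create, get, list, watch, patch, update, delete, delete collection) exercises the node.k8s.io/v1 client. A minimal create-and-delete sketch under the same imports; the object name is illustrative, and "runc" is an assumption about a handler the node's CRI runtime would actually configure.

// Requires: nodev1 "k8s.io/api/node/v1".
func runtimeClassRoundTrip(ctx context.Context, cs kubernetes.Interface) error {
	rc := &nodev1.RuntimeClass{
		ObjectMeta: metav1.ObjectMeta{Name: "example-runtimeclass"},
		Handler:    "runc", // must name a handler known to the node's runtime
	}
	if _, err := cs.NodeV1().RuntimeClasses().Create(ctx, rc, metav1.CreateOptions{}); err != nil {
		return err
	}
	return cs.NodeV1().RuntimeClasses().Delete(ctx, rc.Name, metav1.DeleteOptions{})
}
------------------------------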
•{"msg":"PASSED [sig-node] RuntimeClass should support RuntimeClasses API operations [Conformance]","total":330,"completed":152,"skipped":2560,"failed":10,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]"]} SSSSS ------------------------------ [sig-network] IngressClass API should support creating IngressClass API operations [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-network] IngressClass API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 00:50:27.218: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename ingressclass STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] IngressClass API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/ingressclass.go:149 [It] should support creating IngressClass API operations [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: getting /apis STEP: getting /apis/networking.k8s.io STEP: getting /apis/networking.k8s.iov1 STEP: creating STEP: getting STEP: listing STEP: watching Mar 22 00:50:27.379: INFO: starting watch STEP: patching STEP: updating Mar 22 00:50:27.425: INFO: waiting for watch events with expected annotations Mar 22 00:50:27.425: INFO: saw patched and updated annotations STEP: deleting STEP: deleting a collection [AfterEach] [sig-network] IngressClass API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 00:50:27.485: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "ingressclass-2730" for this suite. 
•{"msg":"PASSED [sig-network] IngressClass API should support creating IngressClass API operations [Conformance]","total":330,"completed":153,"skipped":2565,"failed":10,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]"]} SSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 00:50:27.492: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin Mar 22 00:50:27.556: INFO: Waiting up to 5m0s for pod "downwardapi-volume-182a6df5-9512-47bd-a51d-38dacc133d8d" in namespace "projected-773" to be "Succeeded or Failed" Mar 22 00:50:27.606: INFO: Pod "downwardapi-volume-182a6df5-9512-47bd-a51d-38dacc133d8d": Phase="Pending", Reason="", readiness=false. Elapsed: 50.446179ms Mar 22 00:50:29.611: INFO: Pod "downwardapi-volume-182a6df5-9512-47bd-a51d-38dacc133d8d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.055047322s Mar 22 00:50:31.672: INFO: Pod "downwardapi-volume-182a6df5-9512-47bd-a51d-38dacc133d8d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.116259731s Mar 22 00:50:33.677: INFO: Pod "downwardapi-volume-182a6df5-9512-47bd-a51d-38dacc133d8d": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.121256337s STEP: Saw pod success Mar 22 00:50:33.677: INFO: Pod "downwardapi-volume-182a6df5-9512-47bd-a51d-38dacc133d8d" satisfied condition "Succeeded or Failed" Mar 22 00:50:33.681: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-182a6df5-9512-47bd-a51d-38dacc133d8d container client-container: STEP: delete the pod Mar 22 00:50:33.744: INFO: Waiting for pod downwardapi-volume-182a6df5-9512-47bd-a51d-38dacc133d8d to disappear Mar 22 00:50:33.773: INFO: Pod downwardapi-volume-182a6df5-9512-47bd-a51d-38dacc133d8d no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 00:50:33.774: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-773" for this suite. • [SLOW TEST:6.291 seconds] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":330,"completed":154,"skipped":2575,"failed":10,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]"]} SSS ------------------------------ [sig-node] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 00:50:33.783: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should run with the expected status [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount' STEP: Container 
'terminate-cmd-rpa': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpa': should get the expected 'State' STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpof': should get the expected 'State' STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpn': should get the expected 'State' STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance] [AfterEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 00:51:14.416: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-7494" for this suite. • [SLOW TEST:40.638 seconds] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 blackbox test /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:41 when starting a container that exits /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:42 should run with the expected status [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":330,"completed":155,"skipped":2578,"failed":10,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]"]} SSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should release no longer matching pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-apps] ReplicationController 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 00:51:14.422: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54 [It] should release no longer matching pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Given a ReplicationController is created STEP: When the matched label of one of its pods change Mar 22 00:51:15.181: INFO: Pod name pod-release: Found 0 pods out of 1 Mar 22 00:51:20.229: INFO: Pod name pod-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 00:51:20.460: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-333" for this suite. • [SLOW TEST:6.431 seconds] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should release no longer matching pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":330,"completed":156,"skipped":2592,"failed":10,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 00:51:20.854: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating pod pod-subpath-test-configmap-lxzf STEP: Creating a pod to test atomic-volume-subpath Mar 22 00:51:21.417: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-lxzf" in namespace "subpath-171" to be "Succeeded or Failed" Mar 22 00:51:21.616: INFO: Pod "pod-subpath-test-configmap-lxzf": Phase="Pending", Reason="", readiness=false. Elapsed: 199.276636ms Mar 22 00:51:23.620: INFO: Pod "pod-subpath-test-configmap-lxzf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.203355434s Mar 22 00:51:25.655: INFO: Pod "pod-subpath-test-configmap-lxzf": Phase="Pending", Reason="", readiness=false. Elapsed: 4.238151115s Mar 22 00:51:27.805: INFO: Pod "pod-subpath-test-configmap-lxzf": Phase="Running", Reason="", readiness=true. Elapsed: 6.388839588s Mar 22 00:51:29.810: INFO: Pod "pod-subpath-test-configmap-lxzf": Phase="Running", Reason="", readiness=true. Elapsed: 8.393187535s Mar 22 00:51:31.814: INFO: Pod "pod-subpath-test-configmap-lxzf": Phase="Running", Reason="", readiness=true. Elapsed: 10.397910896s Mar 22 00:51:33.819: INFO: Pod "pod-subpath-test-configmap-lxzf": Phase="Running", Reason="", readiness=true. Elapsed: 12.402440521s Mar 22 00:51:36.063: INFO: Pod "pod-subpath-test-configmap-lxzf": Phase="Running", Reason="", readiness=true. Elapsed: 14.646525058s Mar 22 00:51:38.067: INFO: Pod "pod-subpath-test-configmap-lxzf": Phase="Running", Reason="", readiness=true. Elapsed: 16.650563353s Mar 22 00:51:40.072: INFO: Pod "pod-subpath-test-configmap-lxzf": Phase="Running", Reason="", readiness=true. Elapsed: 18.655402214s Mar 22 00:51:42.077: INFO: Pod "pod-subpath-test-configmap-lxzf": Phase="Running", Reason="", readiness=true. Elapsed: 20.660165817s Mar 22 00:51:44.081: INFO: Pod "pod-subpath-test-configmap-lxzf": Phase="Running", Reason="", readiness=true. Elapsed: 22.664824717s Mar 22 00:51:46.085: INFO: Pod "pod-subpath-test-configmap-lxzf": Phase="Running", Reason="", readiness=true. Elapsed: 24.668547873s Mar 22 00:51:48.089: INFO: Pod "pod-subpath-test-configmap-lxzf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.672301042s STEP: Saw pod success Mar 22 00:51:48.089: INFO: Pod "pod-subpath-test-configmap-lxzf" satisfied condition "Succeeded or Failed" Mar 22 00:51:48.092: INFO: Trying to get logs from node latest-worker2 pod pod-subpath-test-configmap-lxzf container test-container-subpath-configmap-lxzf: STEP: delete the pod Mar 22 00:51:48.151: INFO: Waiting for pod pod-subpath-test-configmap-lxzf to disappear Mar 22 00:51:48.164: INFO: Pod pod-subpath-test-configmap-lxzf no longer exists STEP: Deleting pod pod-subpath-test-configmap-lxzf Mar 22 00:51:48.164: INFO: Deleting pod "pod-subpath-test-configmap-lxzf" in namespace "subpath-171" [AfterEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 00:51:48.167: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-171" for this suite. 
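
The subpath spec above mounts a single ConfigMap key into a container through volumeMounts[].subPath, then polls the pod through the Pending → Running → Succeeded progression recorded in the log. A sketch of building such a pod with client-go follows; the namespace, object names, and busybox image are illustrative assumptions standing in for the test's own fixture:

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx, ns := context.TODO(), "default"

	// Backing data: one key that will be exposed as a single file via subPath.
	cm := &corev1.ConfigMap{
		ObjectMeta: metav1.ObjectMeta{Name: "subpath-demo-data"},
		Data:       map[string]string{"file.txt": "hello from a configmap subpath"},
	}
	if _, err := cs.CoreV1().ConfigMaps(ns).Create(ctx, cm, metav1.CreateOptions{}); err != nil {
		panic(err)
	}

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "subpath-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "cfg",
				VolumeSource: corev1.VolumeSource{
					ConfigMap: &corev1.ConfigMapVolumeSource{
						LocalObjectReference: corev1.LocalObjectReference{Name: "subpath-demo-data"},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "reader",
				Image:   "busybox",
				Command: []string{"/bin/sh", "-c", "cat /data/file.txt"},
				// subPath mounts just this key as a file instead of shadowing /data.
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "cfg",
					MountPath: "/data/file.txt",
					SubPath:   "file.txt",
				}},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods(ns).Create(ctx, pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}

Mounting at /data/file.txt with subPath "file.txt" exposes just that key as a file, rather than shadowing the whole /data directory with the volume.
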
• [SLOW TEST:27.322 seconds] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":330,"completed":157,"skipped":2649,"failed":10,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]"]} SSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 00:51:48.176: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test emptydir 0777 on node default medium Mar 22 00:51:48.424: INFO: Waiting up to 5m0s for pod "pod-20260644-0d40-46e4-9956-95b447e0a1a6" in namespace "emptydir-9207" to be "Succeeded or Failed" Mar 22 00:51:48.501: INFO: Pod "pod-20260644-0d40-46e4-9956-95b447e0a1a6": Phase="Pending", Reason="", readiness=false. Elapsed: 76.963363ms Mar 22 00:51:50.504: INFO: Pod "pod-20260644-0d40-46e4-9956-95b447e0a1a6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.079884688s Mar 22 00:51:52.572: INFO: Pod "pod-20260644-0d40-46e4-9956-95b447e0a1a6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.148229496s Mar 22 00:51:54.577: INFO: Pod "pod-20260644-0d40-46e4-9956-95b447e0a1a6": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.15319266s STEP: Saw pod success Mar 22 00:51:54.577: INFO: Pod "pod-20260644-0d40-46e4-9956-95b447e0a1a6" satisfied condition "Succeeded or Failed" Mar 22 00:51:54.580: INFO: Trying to get logs from node latest-worker2 pod pod-20260644-0d40-46e4-9956-95b447e0a1a6 container test-container: STEP: delete the pod Mar 22 00:51:54.640: INFO: Waiting for pod pod-20260644-0d40-46e4-9956-95b447e0a1a6 to disappear Mar 22 00:51:54.776: INFO: Pod pod-20260644-0d40-46e4-9956-95b447e0a1a6 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 00:51:54.776: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9207" for this suite. • [SLOW TEST:6.610 seconds] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":330,"completed":158,"skipped":2661,"failed":10,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-node] KubeletManagedEtcHosts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 00:51:54.786: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts STEP: Waiting for a default service account to be provisioned in namespace [It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Setting up the test STEP: Creating hostNetwork=false pod Mar 22 00:51:55.402: INFO: The status of Pod test-pod is Pending, waiting for it to be Running (with Ready = true) Mar 22 00:51:57.406: INFO: The status of 
Pod test-pod is Pending, waiting for it to be Running (with Ready = true) Mar 22 00:51:59.536: INFO: The status of Pod test-pod is Pending, waiting for it to be Running (with Ready = true) Mar 22 00:52:01.406: INFO: The status of Pod test-pod is Pending, waiting for it to be Running (with Ready = true) Mar 22 00:52:03.407: INFO: The status of Pod test-pod is Running (Ready = true) STEP: Creating hostNetwork=true pod Mar 22 00:52:03.428: INFO: The status of Pod test-host-network-pod is Pending, waiting for it to be Running (with Ready = true) Mar 22 00:52:05.432: INFO: The status of Pod test-host-network-pod is Pending, waiting for it to be Running (with Ready = true) Mar 22 00:52:07.432: INFO: The status of Pod test-host-network-pod is Pending, waiting for it to be Running (with Ready = true) Mar 22 00:52:09.432: INFO: The status of Pod test-host-network-pod is Running (Ready = true) STEP: Running the test STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false Mar 22 00:52:09.434: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-2074 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 22 00:52:09.434: INFO: >>> kubeConfig: /root/.kube/config Mar 22 00:52:09.548: INFO: Exec stderr: "" Mar 22 00:52:09.548: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-2074 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 22 00:52:09.548: INFO: >>> kubeConfig: /root/.kube/config Mar 22 00:52:09.669: INFO: Exec stderr: "" Mar 22 00:52:09.669: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-2074 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 22 00:52:09.669: INFO: >>> kubeConfig: /root/.kube/config Mar 22 00:52:09.753: INFO: Exec stderr: "" Mar 22 00:52:09.753: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-2074 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 22 00:52:09.753: INFO: >>> kubeConfig: /root/.kube/config Mar 22 00:52:09.844: INFO: Exec stderr: "" STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount Mar 22 00:52:09.844: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-2074 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 22 00:52:09.844: INFO: >>> kubeConfig: /root/.kube/config Mar 22 00:52:09.937: INFO: Exec stderr: "" Mar 22 00:52:09.937: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-2074 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 22 00:52:09.937: INFO: >>> kubeConfig: /root/.kube/config Mar 22 00:52:10.042: INFO: Exec stderr: "" STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true Mar 22 00:52:10.042: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-2074 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 22 00:52:10.042: INFO: >>> kubeConfig: 
/root/.kube/config Mar 22 00:52:10.158: INFO: Exec stderr: "" Mar 22 00:52:10.158: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-2074 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 22 00:52:10.158: INFO: >>> kubeConfig: /root/.kube/config Mar 22 00:52:10.251: INFO: Exec stderr: "" Mar 22 00:52:10.251: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-2074 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 22 00:52:10.251: INFO: >>> kubeConfig: /root/.kube/config Mar 22 00:52:10.357: INFO: Exec stderr: "" Mar 22 00:52:10.357: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-2074 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 22 00:52:10.357: INFO: >>> kubeConfig: /root/.kube/config Mar 22 00:52:10.474: INFO: Exec stderr: "" [AfterEach] [sig-node] KubeletManagedEtcHosts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 00:52:10.474: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-kubelet-etc-hosts-2074" for this suite. • [SLOW TEST:15.695 seconds] [sig-node] KubeletManagedEtcHosts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":330,"completed":159,"skipped":2690,"failed":10,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]"]} SSSS ------------------------------ [sig-apps] Deployment deployment should delete old replica sets [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 00:52:10.481: INFO: >>> kubeConfig: /root/.kube/config STEP: 
Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:86 [It] deployment should delete old replica sets [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Mar 22 00:52:10.614: INFO: Pod name cleanup-pod: Found 0 pods out of 1 Mar 22 00:52:15.645: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Mar 22 00:52:15.645: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:80 Mar 22 00:52:15.729: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:{test-cleanup-deployment deployment-6431 5434ed34-9434-475d-ab26-1ecd9d63bc3d 7004656 1 2021-03-22 00:52:15 +0000 UTC map[name:cleanup-pod] map[] [] [] [{e2e.test Update apps/v1 2021-03-22 00:52:15 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.28 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc00036f9c8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[]DeploymentCondition{},ReadyReplicas:0,CollisionCount:nil,},} Mar 22 00:52:15.778: INFO: New ReplicaSet "test-cleanup-deployment-5c896c44c9" of Deployment "test-cleanup-deployment": &ReplicaSet{ObjectMeta:{test-cleanup-deployment-5c896c44c9 deployment-6431 2d29291d-6644-46c9-ac72-6fcdd4d0dd6b 7004659 1 2021-03-22 00:52:15 +0000 UTC map[name:cleanup-pod pod-template-hash:5c896c44c9]
map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-cleanup-deployment 5434ed34-9434-475d-ab26-1ecd9d63bc3d 0xc00277a3d7 0xc00277a3d8}] [] [{kube-controller-manager Update apps/v1 2021-03-22 00:52:15 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5434ed34-9434-475d-ab26-1ecd9d63bc3d\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 5c896c44c9,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod-template-hash:5c896c44c9] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.28 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc00277a468 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:0,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Mar 22 00:52:15.778: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment": Mar 22 00:52:15.778: INFO: &ReplicaSet{ObjectMeta:{test-cleanup-controller deployment-6431 0d3d1e43-deee-4f06-9f65-74156816d395 7004658 1 2021-03-22 00:52:10 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [{apps/v1 Deployment test-cleanup-deployment 5434ed34-9434-475d-ab26-1ecd9d63bc3d 0xc00277a2c7 0xc00277a2c8}] [] [{e2e.test Update apps/v1 2021-03-22 00:52:10 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2021-03-22 00:52:15 +0000 UTC FieldsV1 
{"f:metadata":{"f:ownerReferences":{".":{},"k:{\"uid\":\"5434ed34-9434-475d-ab26-1ecd9d63bc3d\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc00277a368 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Mar 22 00:52:15.872: INFO: Pod "test-cleanup-controller-6nr8x" is available: &Pod{ObjectMeta:{test-cleanup-controller-6nr8x test-cleanup-controller- deployment-6431 4f2f0490-b57a-4739-aa37-52999e83461b 7004645 0 2021-03-22 00:52:10 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [{apps/v1 ReplicaSet test-cleanup-controller 0d3d1e43-deee-4f06-9f65-74156816d395 0xc0018e3ca7 0xc0018e3ca8}] [] [{kube-controller-manager Update v1 2021-03-22 00:52:10 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0d3d1e43-deee-4f06-9f65-74156816d395\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-03-22 00:52:14 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.164\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-pwvzb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-pwvzb,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-pwvzb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-22 00:52:10 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-22 00:52:14 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2021-03-22 00:52:14 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-22 00:52:10 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.9,PodIP:10.244.2.164,StartTime:2021-03-22 00:52:10 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-03-22 00:52:13 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:containerd://e074421e48f6b2be76b61bf92350f1c98925034bce153425ef5341442ad1b8b1,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.164,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 22 00:52:15.873: INFO: Pod "test-cleanup-deployment-5c896c44c9-vlj49" is not available: &Pod{ObjectMeta:{test-cleanup-deployment-5c896c44c9-vlj49 test-cleanup-deployment-5c896c44c9- deployment-6431 2b519e5c-db5c-4f75-976b-5bf27f099271 7004666 0 2021-03-22 00:52:15 +0000 UTC map[name:cleanup-pod pod-template-hash:5c896c44c9] map[] [{apps/v1 ReplicaSet test-cleanup-deployment-5c896c44c9 2d29291d-6644-46c9-ac72-6fcdd4d0dd6b 0xc0018e3e67 0xc0018e3e68}] [] [{kube-controller-manager Update v1 2021-03-22 00:52:15 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2d29291d-6644-46c9-ac72-6fcdd4d0dd6b\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-pwvzb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-pwvzb,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:k8s.gcr.io/e2e-test-images/agnhost:2.28,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-pwvzb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:ni
l,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-22 00:52:15 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 00:52:15.873: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-6431" for this suite. 
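
Note the RevisionHistoryLimit:*0 in the Deployment dump above: the spec first creates a bare ReplicaSet (test-cleanup-controller), then a Deployment whose selector adopts it, and relies on the deployment controller pruning superseded ReplicaSets down to that limit, which is what "history to be cleaned up" refers to. Below is a sketch of creating a Deployment that keeps no rollout history; the names are illustrative, and the agnhost image is the one from the dump:

package main

import (
	"context"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	replicas, history := int32(1), int32(0) // history=0: delete superseded ReplicaSets immediately.
	labels := map[string]string{"name": "cleanup-pod"}
	d := &appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: "cleanup-demo"},
		Spec: appsv1.DeploymentSpec{
			Replicas:             &replicas,
			RevisionHistoryLimit: &history,
			Selector:             &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "agnhost",
						Image: "k8s.gcr.io/e2e-test-images/agnhost:2.28",
					}},
				},
			},
		},
	}
	if _, err := cs.AppsV1().Deployments("default").Create(context.TODO(), d, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
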
• [SLOW TEST:5.440 seconds] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should delete old replica sets [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":330,"completed":160,"skipped":2694,"failed":10,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]"]} SSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 00:52:15.922: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted Mar 22 00:52:24.669: INFO: 4 pods remaining Mar 22 00:52:24.669: INFO: 0 pods has nil DeletionTimestamp Mar 22 00:52:24.669: INFO: Mar 22 00:52:25.971: INFO: 0 pods remaining Mar 22 00:52:25.971: INFO: 0 pods has nil DeletionTimestamp Mar 22 00:52:25.971: INFO: Mar 22 00:52:26.512: INFO: 0 pods remaining Mar 22 00:52:26.512: INFO: 0 pods has nil DeletionTimestamp Mar 22 00:52:26.512: INFO: STEP: Gathering metrics W0322 00:52:29.236973 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Mar 22 00:53:31.343: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering. [AfterEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 00:53:31.344: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-6005" for this suite. 
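
The garbage-collector spec deletes a ReplicationController with delete options requesting foreground cascading: the API server keeps the RC object, marked with the foregroundDeletion finalizer, until all of its pods are gone, which is why the log counts "pods remaining" down to zero before the RC disappears. A minimal sketch of issuing such a delete (the RC name and namespace are placeholders):

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Foreground propagation: the API server keeps the owner object around,
	// carrying the foregroundDeletion finalizer, until its dependents are deleted.
	fg := metav1.DeletePropagationForeground
	err = cs.CoreV1().ReplicationControllers("default").Delete(
		context.TODO(), "demo-rc", metav1.DeleteOptions{PropagationPolicy: &fg})
	if err != nil {
		panic(err)
	}
}
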
• [SLOW TEST:75.430 seconds] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":330,"completed":161,"skipped":2709,"failed":10,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 00:53:31.353: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: http [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Performing setup for networking test in namespace pod-network-test-9157 STEP: creating a selector STEP: Creating the service pods in kubernetes Mar 22 00:53:31.494: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Mar 22 00:53:31.683: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 22 00:53:33.689: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 22 00:53:35.711: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 22 00:53:37.699: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 22 00:53:39.688: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 22 00:53:41.687: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 22 00:53:43.688: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 22 00:53:45.688: INFO: The status of Pod netserver-0 is Running (Ready = true) Mar 22 
00:53:45.694: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Mar 22 00:53:49.724: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2 Mar 22 00:53:49.724: INFO: Breadth first check of 10.244.2.172 on host 172.18.0.9... Mar 22 00:53:49.726: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.188:9080/dial?request=hostname&protocol=http&host=10.244.2.172&port=8080&tries=1'] Namespace:pod-network-test-9157 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 22 00:53:49.726: INFO: >>> kubeConfig: /root/.kube/config Mar 22 00:53:49.830: INFO: Waiting for responses: map[] Mar 22 00:53:49.830: INFO: reached 10.244.2.172 after 0/1 tries Mar 22 00:53:49.830: INFO: Breadth first check of 10.244.1.187 on host 172.18.0.13... Mar 22 00:53:49.834: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.188:9080/dial?request=hostname&protocol=http&host=10.244.1.187&port=8080&tries=1'] Namespace:pod-network-test-9157 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 22 00:53:49.834: INFO: >>> kubeConfig: /root/.kube/config Mar 22 00:53:49.942: INFO: Waiting for responses: map[] Mar 22 00:53:49.942: INFO: reached 10.244.1.187 after 0/1 tries Mar 22 00:53:49.942: INFO: Going to retry 0 out of 2 pods.... [AfterEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 00:53:49.942: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-9157" for this suite. 
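
The intra-pod networking spec starts a netserver pod on each node plus a test client pod, then has the client reach every netserver through the agnhost /dial proxy endpoint, which is exactly what the curl commands in the ExecWithOptions lines above do. The probe itself is an ordinary HTTP GET; a standalone sketch follows, where the IPs and ports echo this particular run but change per run, and the program must execute somewhere with pod-network reachability:

package main

import (
	"fmt"
	"io"
	"net/http"
	"net/url"
)

func main() {
	// /dial asks the test client pod (10.244.1.188:9080 in this run) to
	// fetch the hostname of a target netserver pod over HTTP.
	q := url.Values{
		"request":  {"hostname"},
		"protocol": {"http"},
		"host":     {"10.244.2.172"}, // target pod IP from the log; a per-run value
		"port":     {"8080"},
		"tries":    {"1"},
	}
	probe := url.URL{Scheme: "http", Host: "10.244.1.188:9080", Path: "/dial", RawQuery: q.Encode()}

	resp, err := http.Get(probe.String())
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(body)) // e.g. a JSON "responses" list naming the pod that answered
}
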
• [SLOW TEST:18.598 seconds] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/framework.go:23 Granular Checks: Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/networking.go:30 should function for intra-pod communication: http [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","total":330,"completed":162,"skipped":2744,"failed":10,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-node] Events /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 00:53:49.952: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: retrieving the pod Mar 22 00:53:54.216: INFO: &Pod{ObjectMeta:{send-events-7c0d0a96-0136-4803-9bb7-19b6629036b2 events-7419 f136a5e2-b538-4243-9380-6cb56076d72a 7005260 0 2021-03-22 00:53:50 +0000 UTC map[name:foo time:146791486] map[] [] [] [{e2e.test Update v1 2021-03-22 00:53:50 +0000 UTC FieldsV1 
{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:time":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"p\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":80,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:protocol":{}}},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-03-22 00:53:53 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.189\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vbk8t,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vbk8t,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:p,Image:k8s.gcr.io/e2e-test-images/agnhost:2.28,Command:[],Args:[serve-hostname],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vbk8t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,Enabl
eServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-22 00:53:50 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-22 00:53:53 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-22 00:53:53 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-22 00:53:50 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:10.244.1.189,StartTime:2021-03-22 00:53:50 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:p,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-03-22 00:53:53 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/agnhost:2.28,ImageID:k8s.gcr.io/e2e-test-images/agnhost@sha256:4f3eec1e602e1e0b0e5858c8f3d399d0ef536799ec1f7666e025f211001cb706,ContainerID:containerd://f36d20031dff48b0b8813043161ba0c8d017d606597e1dcbc9a59fd616813d16,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.189,},},EphemeralContainerStatuses:[]ContainerStatus{},},} STEP: checking for scheduler event about the pod Mar 22 00:53:56.268: INFO: Saw scheduler event for our pod. STEP: checking for kubelet event about the pod Mar 22 00:53:58.273: INFO: Saw kubelet event for our pod. STEP: deleting the pod [AfterEach] [sig-node] Events /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 00:53:58.279: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-7419" for this suite. 
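The scheduler/kubelet event checks above amount to listing core/v1 Events filtered to the pod and inspecting each event's source component. A minimal client-go sketch of that query, assuming the same kubeconfig as this run; the namespace and pod name are copied from the log above:

    package main

    import (
    	"context"
    	"fmt"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	// Field selector narrows the listing to events about the test pod.
    	sel := "involvedObject.name=send-events-7c0d0a96-0136-4803-9bb7-19b6629036b2"
    	evts, err := cs.CoreV1().Events("events-7419").List(context.TODO(),
    		metav1.ListOptions{FieldSelector: sel})
    	if err != nil {
    		panic(err)
    	}
    	for _, e := range evts.Items {
    		// The test asserts it sees at least one event whose
    		// Source.Component is "default-scheduler" and one whose
    		// Source.Component is "kubelet" before deleting the pod.
    		fmt.Printf("%-20s %-12s %s\n", e.Source.Component, e.Reason, e.Message)
    	}
    }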
• [SLOW TEST:8.396 seconds] [sig-node] Events /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]","total":330,"completed":163,"skipped":2776,"failed":10,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 00:53:58.348: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating secret with name s-test-opt-del-0b2eeee8-fe4d-415c-93a3-d2d8c2c97e79 STEP: Creating secret with name s-test-opt-upd-6d37d0ea-3180-4cf5-82db-f1ede41cc3c6 STEP: Creating the pod Mar 22 00:53:58.553: INFO: The status of Pod pod-secrets-ea455dea-2fed-4737-a2bc-f2029e6a3600 is Pending, waiting for it to be Running (with Ready = true) Mar 22 00:54:00.768: INFO: The status of Pod pod-secrets-ea455dea-2fed-4737-a2bc-f2029e6a3600 is Pending, waiting for it to be Running (with Ready = true) Mar 22 00:54:02.610: INFO: The status of Pod pod-secrets-ea455dea-2fed-4737-a2bc-f2029e6a3600 is Pending, waiting for it to be Running (with Ready = true) Mar 22 00:54:04.557: INFO: The status of Pod pod-secrets-ea455dea-2fed-4737-a2bc-f2029e6a3600 is Pending, waiting for it to be Running (with Ready = true) Mar 22 00:54:06.589: INFO: The status of Pod pod-secrets-ea455dea-2fed-4737-a2bc-f2029e6a3600 is Running (Ready = true) STEP: Deleting secret s-test-opt-del-0b2eeee8-fe4d-415c-93a3-d2d8c2c97e79 STEP: Updating secret s-test-opt-upd-6d37d0ea-3180-4cf5-82db-f1ede41cc3c6 STEP: Creating 
secret with name s-test-opt-create-2b674d46-9d94-490b-b6bb-292b844c9fd8 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 00:55:23.482: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-5697" for this suite. • [SLOW TEST:85.144 seconds] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":330,"completed":164,"skipped":2806,"failed":10,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 00:55:23.493: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 [It] should provide container's memory limit [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin Mar 22 00:55:23.593: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8e016de5-6c94-4945-a302-d3e9a5865570" in namespace "downward-api-2416" to be "Succeeded or Failed" Mar 22 00:55:23.609: INFO: Pod "downwardapi-volume-8e016de5-6c94-4945-a302-d3e9a5865570": Phase="Pending", Reason="", readiness=false. 
Elapsed: 15.802186ms Mar 22 00:55:25.821: INFO: Pod "downwardapi-volume-8e016de5-6c94-4945-a302-d3e9a5865570": Phase="Pending", Reason="", readiness=false. Elapsed: 2.228341703s Mar 22 00:55:27.827: INFO: Pod "downwardapi-volume-8e016de5-6c94-4945-a302-d3e9a5865570": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.234384124s STEP: Saw pod success Mar 22 00:55:27.828: INFO: Pod "downwardapi-volume-8e016de5-6c94-4945-a302-d3e9a5865570" satisfied condition "Succeeded or Failed" Mar 22 00:55:27.862: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-8e016de5-6c94-4945-a302-d3e9a5865570 container client-container: STEP: delete the pod Mar 22 00:55:27.908: INFO: Waiting for pod downwardapi-volume-8e016de5-6c94-4945-a302-d3e9a5865570 to disappear Mar 22 00:55:27.937: INFO: Pod downwardapi-volume-8e016de5-6c94-4945-a302-d3e9a5865570 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 00:55:27.937: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2416" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":330,"completed":165,"skipped":2838,"failed":10,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 00:55:28.188: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 22 00:55:29.698: INFO: 
deployment "sample-webhook-deployment" doesn't have the required revision set Mar 22 00:55:31.713: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751971329, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751971329, loc:(*time.Location)(0x99208a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751971329, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751971329, loc:(*time.Location)(0x99208a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-8977db\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 22 00:55:33.720: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751971329, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751971329, loc:(*time.Location)(0x99208a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751971329, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751971329, loc:(*time.Location)(0x99208a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-8977db\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 22 00:55:36.759: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with different stored version [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Mar 22 00:55:36.764: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-7477-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource while v1 is storage version STEP: Patching Custom Resource Definition to set v2 as storage STEP: Patching the custom resource while v2 is storage version [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 00:55:38.065: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-6213" for this suite. STEP: Destroying namespace "webhook-6213-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 • [SLOW TEST:10.044 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with different stored version [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":330,"completed":166,"skipped":2886,"failed":10,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]"]} SSSSSSSSSSSSS ------------------------------ [sig-node] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 00:55:38.233: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:38 [BeforeEach] when scheduling a busybox command that always fails in a pod /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:82 [It] should have an terminated reason [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [AfterEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 00:55:42.397: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-5" for this suite. 
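The assertion behind the kubelet test above is that a container whose command always fails ends up with a non-empty termination reason in pod status. A minimal client-go sketch that reads that field, assuming the same kubeconfig; the pod name is a placeholder for the one the test generates in kubelet-test-5:

    package main

    import (
    	"context"
    	"fmt"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	// Placeholder pod name for the always-failing busybox pod.
    	pod, err := cs.CoreV1().Pods("kubelet-test-5").Get(
    		context.TODO(), "bin-false-pod", metav1.GetOptions{})
    	if err != nil {
    		panic(err)
    	}
    	for _, st := range pod.Status.ContainerStatuses {
    		if t := st.State.Terminated; t != nil {
    			// The conformance check is that Reason is non-empty
    			// (typically "Error" for a command that always fails).
    			fmt.Printf("%s terminated: reason=%q exitCode=%d\n",
    				st.Name, t.Reason, t.ExitCode)
    		}
    	}
    }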
•{"msg":"PASSED [sig-node] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":330,"completed":167,"skipped":2899,"failed":10,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]"]} SSSSSSSSSSS ------------------------------ [sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 00:55:42.407: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating service in namespace services-5199 STEP: creating service affinity-clusterip-transition in namespace services-5199 STEP: creating replication controller affinity-clusterip-transition in namespace services-5199 I0322 00:55:42.760269 7 runners.go:190] Created replication controller with name: affinity-clusterip-transition, namespace: services-5199, replica count: 3 I0322 00:55:45.811667 7 runners.go:190] affinity-clusterip-transition Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0322 00:55:48.812242 7 runners.go:190] affinity-clusterip-transition Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Mar 22 00:55:48.818: INFO: Creating new exec pod E0322 00:55:52.843774 7 reflector.go:138] k8s.io/kubernetes/test/e2e/framework/service/jig.go:437: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: the server could not find the requested resource E0322 00:55:54.326255 7 reflector.go:138] k8s.io/kubernetes/test/e2e/framework/service/jig.go:437: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: the server could not find the requested resource E0322 00:55:56.823905 7 
reflector.go:138] k8s.io/kubernetes/test/e2e/framework/service/jig.go:437: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: the server could not find the requested resource
E0322 00:56:03.060557 7 reflector.go:138] k8s.io/kubernetes/test/e2e/framework/service/jig.go:437: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: the server could not find the requested resource
E0322 00:56:10.764560 7 reflector.go:138] k8s.io/kubernetes/test/e2e/framework/service/jig.go:437: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: the server could not find the requested resource
E0322 00:56:24.757805 7 reflector.go:138] k8s.io/kubernetes/test/e2e/framework/service/jig.go:437: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: the server could not find the requested resource
E0322 00:57:06.520333 7 reflector.go:138] k8s.io/kubernetes/test/e2e/framework/service/jig.go:437: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: the server could not find the requested resource
E0322 00:57:40.783729 7 reflector.go:138] k8s.io/kubernetes/test/e2e/framework/service/jig.go:437: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: the server could not find the requested resource
Mar 22 00:57:52.842: FAIL: Unexpected error:
    <*errors.errorString | 0xc0010260d0>: {
        s: "no subset of available IP address found for the endpoint affinity-clusterip-transition within timeout 2m0s",
    }
    no subset of available IP address found for the endpoint affinity-clusterip-transition within timeout 2m0s
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/network.execAffinityTestForNonLBServiceWithOptionalTransition(0xc000f59760, 0x73e8b88, 0xc00242b8c0, 0xc0006d4f00, 0x1)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2563 +0x625
k8s.io/kubernetes/test/e2e/network.execAffinityTestForNonLBServiceWithTransition(...)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2518
k8s.io/kubernetes/test/e2e/network.glob..func24.24()
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1814 +0xa5
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc002c6a180)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c
k8s.io/kubernetes/test/e2e.TestE2E(0xc002c6a180)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b
testing.tRunner(0xc002c6a180, 0x6d60740)
	/usr/local/go/src/testing/testing.go:1194 +0xef
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1239 +0x2b3
Mar 22 00:57:52.843: INFO: Cleaning up the exec pod
STEP: deleting ReplicationController affinity-clusterip-transition in namespace services-5199, will wait for the garbage collector to delete the pods
Mar 22 00:57:52.964: INFO: Deleting ReplicationController affinity-clusterip-transition took: 6.626447ms
Mar 22 00:57:53.664: INFO: Terminating ReplicationController affinity-clusterip-transition pods took: 700.454787ms
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
STEP: Collecting events from namespace "services-5199".
STEP: Found 23 events.
Mar 22 00:58:45.255: INFO: At 2021-03-22 00:55:42 +0000 UTC - event for affinity-clusterip-transition: {replication-controller } SuccessfulCreate: Created pod: affinity-clusterip-transition-7dmvv Mar 22 00:58:45.255: INFO: At 2021-03-22 00:55:42 +0000 UTC - event for affinity-clusterip-transition: {replication-controller } SuccessfulCreate: Created pod: affinity-clusterip-transition-jm2xn Mar 22 00:58:45.255: INFO: At 2021-03-22 00:55:42 +0000 UTC - event for affinity-clusterip-transition: {replication-controller } SuccessfulCreate: Created pod: affinity-clusterip-transition-zrln9 Mar 22 00:58:45.255: INFO: At 2021-03-22 00:55:42 +0000 UTC - event for affinity-clusterip-transition-7dmvv: {default-scheduler } Scheduled: Successfully assigned services-5199/affinity-clusterip-transition-7dmvv to latest-worker Mar 22 00:58:45.255: INFO: At 2021-03-22 00:55:42 +0000 UTC - event for affinity-clusterip-transition-jm2xn: {default-scheduler } Scheduled: Successfully assigned services-5199/affinity-clusterip-transition-jm2xn to latest-worker2 Mar 22 00:58:45.255: INFO: At 2021-03-22 00:55:42 +0000 UTC - event for affinity-clusterip-transition-zrln9: {default-scheduler } Scheduled: Successfully assigned services-5199/affinity-clusterip-transition-zrln9 to latest-worker2 Mar 22 00:58:45.255: INFO: At 2021-03-22 00:55:44 +0000 UTC - event for affinity-clusterip-transition-7dmvv: {kubelet latest-worker} Pulled: Container image "k8s.gcr.io/e2e-test-images/agnhost:2.28" already present on machine Mar 22 00:58:45.255: INFO: At 2021-03-22 00:55:44 +0000 UTC - event for affinity-clusterip-transition-zrln9: {kubelet latest-worker2} Pulled: Container image "k8s.gcr.io/e2e-test-images/agnhost:2.28" already present on machine Mar 22 00:58:45.255: INFO: At 2021-03-22 00:55:45 +0000 UTC - event for affinity-clusterip-transition-jm2xn: {kubelet latest-worker2} Pulled: Container image "k8s.gcr.io/e2e-test-images/agnhost:2.28" already present on machine Mar 22 00:58:45.255: INFO: At 2021-03-22 00:55:46 +0000 UTC - event for affinity-clusterip-transition-7dmvv: {kubelet latest-worker} Created: Created container affinity-clusterip-transition Mar 22 00:58:45.255: INFO: At 2021-03-22 00:55:46 +0000 UTC - event for affinity-clusterip-transition-zrln9: {kubelet latest-worker2} Created: Created container affinity-clusterip-transition Mar 22 00:58:45.255: INFO: At 2021-03-22 00:55:46 +0000 UTC - event for affinity-clusterip-transition-zrln9: {kubelet latest-worker2} Started: Started container affinity-clusterip-transition Mar 22 00:58:45.255: INFO: At 2021-03-22 00:55:47 +0000 UTC - event for affinity-clusterip-transition-7dmvv: {kubelet latest-worker} Started: Started container affinity-clusterip-transition Mar 22 00:58:45.255: INFO: At 2021-03-22 00:55:47 +0000 UTC - event for affinity-clusterip-transition-jm2xn: {kubelet latest-worker2} Started: Started container affinity-clusterip-transition Mar 22 00:58:45.255: INFO: At 2021-03-22 00:55:47 +0000 UTC - event for affinity-clusterip-transition-jm2xn: {kubelet latest-worker2} Created: Created container affinity-clusterip-transition Mar 22 00:58:45.255: INFO: At 2021-03-22 00:55:48 +0000 UTC - event for execpod-affinityk8k98: {default-scheduler } Scheduled: Successfully assigned services-5199/execpod-affinityk8k98 to latest-worker Mar 22 00:58:45.255: INFO: At 2021-03-22 00:55:50 +0000 UTC - event for execpod-affinityk8k98: {kubelet latest-worker} Pulled: Container image "k8s.gcr.io/e2e-test-images/agnhost:2.28" already present on machine Mar 22 00:58:45.255: INFO: At 
2021-03-22 00:55:51 +0000 UTC - event for execpod-affinityk8k98: {kubelet latest-worker} Created: Created container agnhost-container Mar 22 00:58:45.255: INFO: At 2021-03-22 00:55:51 +0000 UTC - event for execpod-affinityk8k98: {kubelet latest-worker} Started: Started container agnhost-container Mar 22 00:58:45.255: INFO: At 2021-03-22 00:57:52 +0000 UTC - event for execpod-affinityk8k98: {kubelet latest-worker} Killing: Stopping container agnhost-container Mar 22 00:58:45.255: INFO: At 2021-03-22 00:57:53 +0000 UTC - event for affinity-clusterip-transition-7dmvv: {kubelet latest-worker} Killing: Stopping container affinity-clusterip-transition Mar 22 00:58:45.255: INFO: At 2021-03-22 00:57:53 +0000 UTC - event for affinity-clusterip-transition-jm2xn: {kubelet latest-worker2} Killing: Stopping container affinity-clusterip-transition Mar 22 00:58:45.255: INFO: At 2021-03-22 00:57:53 +0000 UTC - event for affinity-clusterip-transition-zrln9: {kubelet latest-worker2} Killing: Stopping container affinity-clusterip-transition Mar 22 00:58:45.258: INFO: POD NODE PHASE GRACE CONDITIONS Mar 22 00:58:45.258: INFO: Mar 22 00:58:45.262: INFO: Logging node info for node latest-control-plane Mar 22 00:58:45.265: INFO: Node Info: &Node{ObjectMeta:{latest-control-plane 490b9532-4cb6-4803-8805-500c50bef538 7005520 0 2021-02-19 10:11:38 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-control-plane kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master:] map[kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2021-02-19 10:11:38 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {kubeadm Update v1 2021-02-19 10:11:39 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{}}}}} {kube-controller-manager Update v1 2021-02-19 10:12:16 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:kind://docker/latest/latest-control-plane,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-03-22 00:54:41 +0000 UTC,LastTransitionTime:2021-02-19 10:11:38 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-03-22 00:54:41 +0000 UTC,LastTransitionTime:2021-02-19 10:11:38 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-03-22 00:54:41 +0000 UTC,LastTransitionTime:2021-02-19 10:11:38 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-03-22 00:54:41 +0000 UTC,LastTransitionTime:2021-02-19 10:12:15 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.14,},NodeAddress{Type:Hostname,Address:latest-control-plane,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:2277e49732264d9b915753a27b5b08cc,SystemUUID:3fcd47a6-9190-448f-a26a-9823c0424f23,BootID:b267d78b-f69b-4338-80e8-3f4944338e5d,KernelVersion:4.15.0-118-generic,OSImage:Ubuntu 20.04 LTS,ContainerRuntimeVersion:containerd://1.3.3-14-g449e9269,KubeletVersion:v1.21.0-alpha.0,KubeProxyVersion:v1.21.0-alpha.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/etcd:3.4.13-0],SizeBytes:254659261,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.21.0-alpha.0],SizeBytes:136866158,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.21.0-alpha.0],SizeBytes:95511852,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.21.0-alpha.0],SizeBytes:88147262,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.21.0-alpha.0],SizeBytes:66088748,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:a3f7549ea04c419276e8b84e90a515bbce5bc8a057be2ed974ec45492eca346e 
k8s.gcr.io/e2e-test-images/agnhost:2.26],SizeBytes:49216572,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:4f3eec1e602e1e0b0e5858c8f3d399d0ef536799ec1f7666e025f211001cb706 k8s.gcr.io/e2e-test-images/agnhost:2.28],SizeBytes:49210832,},ContainerImage{Names:[k8s.gcr.io/coredns:1.7.0],SizeBytes:45355487,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 docker.io/aquasec/kube-bench:0.3.1],SizeBytes:8042926,},ContainerImage{Names:[k8s.gcr.io/pause:3.2],SizeBytes:685724,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:301268,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 22 00:58:45.266: INFO: Logging kubelet events for node latest-control-plane Mar 22 00:58:45.269: INFO: Logging pods the kubelet thinks is on node latest-control-plane Mar 22 00:58:45.286: INFO: etcd-latest-control-plane started at 2021-02-19 10:11:46 +0000 UTC (0+1 container statuses recorded) Mar 22 00:58:45.286: INFO: Container etcd ready: true, restart count 0 Mar 22 00:58:45.286: INFO: kube-proxy-6jdsd started at 2021-02-19 10:11:54 +0000 UTC (0+1 container statuses recorded) Mar 22 00:58:45.286: INFO: Container kube-proxy ready: true, restart count 0 Mar 22 00:58:45.286: INFO: coredns-74ff55c5b-tqd5x started at 2021-03-22 00:01:48 +0000 UTC (0+1 container statuses recorded) Mar 22 00:58:45.286: INFO: Container coredns ready: true, restart count 0 Mar 22 00:58:45.286: INFO: local-path-provisioner-8b46957d4-54gls started at 2021-02-19 10:12:21 +0000 UTC (0+1 container statuses recorded) Mar 22 00:58:45.286: INFO: Container local-path-provisioner ready: true, restart count 0 Mar 22 00:58:45.286: INFO: kube-controller-manager-latest-control-plane started at 2021-02-19 10:11:46 +0000 UTC (0+1 container statuses recorded) Mar 22 00:58:45.286: INFO: Container kube-controller-manager ready: true, restart count 0 Mar 22 00:58:45.286: INFO: kube-scheduler-latest-control-plane started at 2021-02-19 10:11:46 +0000 UTC (0+1 container statuses recorded) Mar 22 00:58:45.286: INFO: Container kube-scheduler ready: true, restart count 0 Mar 22 00:58:45.286: INFO: kube-apiserver-latest-control-plane started at 2021-02-19 10:11:46 +0000 UTC (0+1 container statuses recorded) Mar 22 00:58:45.286: INFO: Container kube-apiserver ready: true, restart count 0 Mar 22 00:58:45.286: INFO: kindnet-94zqp started at 2021-02-19 10:11:54 +0000 UTC (0+1 container statuses recorded) Mar 22 00:58:45.286: INFO: Container kindnet-cni ready: true, restart count 0 Mar 22 00:58:45.286: INFO: coredns-74ff55c5b-9rxsk started at 2021-03-22 00:01:47 +0000 UTC (0+1 container statuses recorded) Mar 22 00:58:45.286: INFO: Container coredns ready: true, restart count 0 W0322 00:58:45.291576 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. 
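The per-node diagnostics above (Node Info, conditions, addresses, images, and the pods the kubelet reports) are the framework's standard dump after a failed spec. A minimal client-go sketch that reproduces the node-condition summary, assuming the same kubeconfig as this run:

    package main

    import (
    	"context"
    	"fmt"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
    	if err != nil {
    		panic(err)
    	}
    	for _, n := range nodes.Items {
    		for _, c := range n.Status.Conditions {
    			// On a healthy node, MemoryPressure/DiskPressure/PIDPressure
    			// are False and Ready is True, as in the dump above.
    			fmt.Printf("%s\t%s=%s\t%s\n", n.Name, c.Type, c.Status, c.Reason)
    		}
    	}
    }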
Mar 22 00:58:45.391: INFO: Latency metrics for node latest-control-plane Mar 22 00:58:45.391: INFO: Logging node info for node latest-worker Mar 22 00:58:45.395: INFO: Node Info: &Node{ObjectMeta:{latest-worker 52cd6d4b-d53f-435d-801a-04c2822dec44 7005567 0 2021-02-19 10:12:05 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux io.kubernetes.storage.mock/node:some-mock-node kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-worker kubernetes.io/os:linux] map[csi.volume.kubernetes.io/nodeid:{"csi-mock-csi-mock-volumes-11":"csi-mock-csi-mock-volumes-11","csi-mock-csi-mock-volumes-1170":"csi-mock-csi-mock-volumes-1170","csi-mock-csi-mock-volumes-1204":"csi-mock-csi-mock-volumes-1204","csi-mock-csi-mock-volumes-1243":"csi-mock-csi-mock-volumes-1243","csi-mock-csi-mock-volumes-135":"csi-mock-csi-mock-volumes-135","csi-mock-csi-mock-volumes-1517":"csi-mock-csi-mock-volumes-1517","csi-mock-csi-mock-volumes-1561":"csi-mock-csi-mock-volumes-1561","csi-mock-csi-mock-volumes-1595":"csi-mock-csi-mock-volumes-1595","csi-mock-csi-mock-volumes-1714":"csi-mock-csi-mock-volumes-1714","csi-mock-csi-mock-volumes-1732":"csi-mock-csi-mock-volumes-1732","csi-mock-csi-mock-volumes-1876":"csi-mock-csi-mock-volumes-1876","csi-mock-csi-mock-volumes-1945":"csi-mock-csi-mock-volumes-1945","csi-mock-csi-mock-volumes-2041":"csi-mock-csi-mock-volumes-2041","csi-mock-csi-mock-volumes-2057":"csi-mock-csi-mock-volumes-2057","csi-mock-csi-mock-volumes-2208":"csi-mock-csi-mock-volumes-2208","csi-mock-csi-mock-volumes-2302":"csi-mock-csi-mock-volumes-2302","csi-mock-csi-mock-volumes-2459":"csi-mock-csi-mock-volumes-2459","csi-mock-csi-mock-volumes-2511":"csi-mock-csi-mock-volumes-2511","csi-mock-csi-mock-volumes-2706":"csi-mock-csi-mock-volumes-2706","csi-mock-csi-mock-volumes-2710":"csi-mock-csi-mock-volumes-2710","csi-mock-csi-mock-volumes-273":"csi-mock-csi-mock-volumes-273","csi-mock-csi-mock-volumes-2782":"csi-mock-csi-mock-volumes-2782","csi-mock-csi-mock-volumes-2797":"csi-mock-csi-mock-volumes-2797","csi-mock-csi-mock-volumes-2805":"csi-mock-csi-mock-volumes-2805","csi-mock-csi-mock-volumes-2843":"csi-mock-csi-mock-volumes-2843","csi-mock-csi-mock-volumes-3075":"csi-mock-csi-mock-volumes-3075","csi-mock-csi-mock-volumes-3107":"csi-mock-csi-mock-volumes-3107","csi-mock-csi-mock-volumes-3115":"csi-mock-csi-mock-volumes-3115","csi-mock-csi-mock-volumes-3164":"csi-mock-csi-mock-volumes-3164","csi-mock-csi-mock-volumes-3202":"csi-mock-csi-mock-volumes-3202","csi-mock-csi-mock-volumes-3235":"csi-mock-csi-mock-volumes-3235","csi-mock-csi-mock-volumes-3298":"csi-mock-csi-mock-volumes-3298","csi-mock-csi-mock-volumes-3313":"csi-mock-csi-mock-volumes-3313","csi-mock-csi-mock-volumes-3364":"csi-mock-csi-mock-volumes-3364","csi-mock-csi-mock-volumes-3488":"csi-mock-csi-mock-volumes-3488","csi-mock-csi-mock-volumes-3490":"csi-mock-csi-mock-volumes-3490","csi-mock-csi-mock-volumes-3546":"csi-mock-csi-mock-volumes-3546","csi-mock-csi-mock-volumes-3568":"csi-mock-csi-mock-volumes-3568","csi-mock-csi-mock-volumes-3595":"csi-mock-csi-mock-volumes-3595","csi-mock-csi-mock-volumes-3615":"csi-mock-csi-mock-volumes-3615","csi-mock-csi-mock-volumes-3622":"csi-mock-csi-mock-volumes-3622","csi-mock-csi-mock-volumes-3660":"csi-mock-csi-mock-volumes-3660","csi-mock-csi-mock-volumes-3738":"csi-mock-csi-mock-volumes-3738","csi-mock-csi-mock-volumes-380":"csi-mock-csi-mock-volumes-380","csi-mock-csi-mock-volumes-3905":"csi-mock-csi-mock-volumes-3905","csi-mock-csi-mock-volumes-3983":"csi-mock-csi-mock-volumes-3983","csi-mock
-csi-mock-volumes-4658":"csi-mock-csi-mock-volumes-4658","csi-mock-csi-mock-volumes-4689":"csi-mock-csi-mock-volumes-4689","csi-mock-csi-mock-volumes-4839":"csi-mock-csi-mock-volumes-4839","csi-mock-csi-mock-volumes-4871":"csi-mock-csi-mock-volumes-4871","csi-mock-csi-mock-volumes-4885":"csi-mock-csi-mock-volumes-4885","csi-mock-csi-mock-volumes-4888":"csi-mock-csi-mock-volumes-4888","csi-mock-csi-mock-volumes-5028":"csi-mock-csi-mock-volumes-5028","csi-mock-csi-mock-volumes-5118":"csi-mock-csi-mock-volumes-5118","csi-mock-csi-mock-volumes-5120":"csi-mock-csi-mock-volumes-5120","csi-mock-csi-mock-volumes-5160":"csi-mock-csi-mock-volumes-5160","csi-mock-csi-mock-volumes-5164":"csi-mock-csi-mock-volumes-5164","csi-mock-csi-mock-volumes-5225":"csi-mock-csi-mock-volumes-5225","csi-mock-csi-mock-volumes-526":"csi-mock-csi-mock-volumes-526","csi-mock-csi-mock-volumes-5365":"csi-mock-csi-mock-volumes-5365","csi-mock-csi-mock-volumes-5399":"csi-mock-csi-mock-volumes-5399","csi-mock-csi-mock-volumes-5443":"csi-mock-csi-mock-volumes-5443","csi-mock-csi-mock-volumes-5520":"csi-mock-csi-mock-volumes-5520","csi-mock-csi-mock-volumes-5561":"csi-mock-csi-mock-volumes-5561","csi-mock-csi-mock-volumes-5608":"csi-mock-csi-mock-volumes-5608","csi-mock-csi-mock-volumes-5652":"csi-mock-csi-mock-volumes-5652","csi-mock-csi-mock-volumes-5672":"csi-mock-csi-mock-volumes-5672","csi-mock-csi-mock-volumes-569":"csi-mock-csi-mock-volumes-569","csi-mock-csi-mock-volumes-5759":"csi-mock-csi-mock-volumes-5759","csi-mock-csi-mock-volumes-5910":"csi-mock-csi-mock-volumes-5910","csi-mock-csi-mock-volumes-6046":"csi-mock-csi-mock-volumes-6046","csi-mock-csi-mock-volumes-6099":"csi-mock-csi-mock-volumes-6099","csi-mock-csi-mock-volumes-621":"csi-mock-csi-mock-volumes-621","csi-mock-csi-mock-volumes-6347":"csi-mock-csi-mock-volumes-6347","csi-mock-csi-mock-volumes-6447":"csi-mock-csi-mock-volumes-6447","csi-mock-csi-mock-volumes-6752":"csi-mock-csi-mock-volumes-6752","csi-mock-csi-mock-volumes-6763":"csi-mock-csi-mock-volumes-6763","csi-mock-csi-mock-volumes-7184":"csi-mock-csi-mock-volumes-7184","csi-mock-csi-mock-volumes-7244":"csi-mock-csi-mock-volumes-7244","csi-mock-csi-mock-volumes-7259":"csi-mock-csi-mock-volumes-7259","csi-mock-csi-mock-volumes-726":"csi-mock-csi-mock-volumes-726","csi-mock-csi-mock-volumes-7302":"csi-mock-csi-mock-volumes-7302","csi-mock-csi-mock-volumes-7346":"csi-mock-csi-mock-volumes-7346","csi-mock-csi-mock-volumes-7378":"csi-mock-csi-mock-volumes-7378","csi-mock-csi-mock-volumes-7385":"csi-mock-csi-mock-volumes-7385","csi-mock-csi-mock-volumes-746":"csi-mock-csi-mock-volumes-746","csi-mock-csi-mock-volumes-7574":"csi-mock-csi-mock-volumes-7574","csi-mock-csi-mock-volumes-7712":"csi-mock-csi-mock-volumes-7712","csi-mock-csi-mock-volumes-7749":"csi-mock-csi-mock-volumes-7749","csi-mock-csi-mock-volumes-7820":"csi-mock-csi-mock-volumes-7820","csi-mock-csi-mock-volumes-7946":"csi-mock-csi-mock-volumes-7946","csi-mock-csi-mock-volumes-8159":"csi-mock-csi-mock-volumes-8159","csi-mock-csi-mock-volumes-8382":"csi-mock-csi-mock-volumes-8382","csi-mock-csi-mock-volumes-8458":"csi-mock-csi-mock-volumes-8458","csi-mock-csi-mock-volumes-8569":"csi-mock-csi-mock-volumes-8569","csi-mock-csi-mock-volumes-8626":"csi-mock-csi-mock-volumes-8626","csi-mock-csi-mock-volumes-8627":"csi-mock-csi-mock-volumes-8627","csi-mock-csi-mock-volumes-8774":"csi-mock-csi-mock-volumes-8774","csi-mock-csi-mock-volumes-8777":"csi-mock-csi-mock-volumes-8777","csi-mock-csi-mock-volumes-8880":"csi-mock-csi-mock-volumes-8880","csi-mock-
csi-mock-volumes-923":"csi-mock-csi-mock-volumes-923","csi-mock-csi-mock-volumes-9279":"csi-mock-csi-mock-volumes-9279","csi-mock-csi-mock-volumes-9372":"csi-mock-csi-mock-volumes-9372","csi-mock-csi-mock-volumes-9687":"csi-mock-csi-mock-volumes-9687","csi-mock-csi-mock-volumes-969":"csi-mock-csi-mock-volumes-969","csi-mock-csi-mock-volumes-983":"csi-mock-csi-mock-volumes-983","csi-mock-csi-mock-volumes-9858":"csi-mock-csi-mock-volumes-9858","csi-mock-csi-mock-volumes-9983":"csi-mock-csi-mock-volumes-9983","csi-mock-csi-mock-volumes-9992":"csi-mock-csi-mock-volumes-9992","csi-mock-csi-mock-volumes-9995":"csi-mock-csi-mock-volumes-9995"} kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-02-19 10:12:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {kube-controller-manager Update v1 2021-03-22 00:39:31 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}}}}} {kubelet Update v1 2021-03-22 00:39:59 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:csi.volume.kubernetes.io/nodeid":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:io.kubernetes.storage.mock/node":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:kind://docker/latest/latest-worker,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 
DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-03-22 00:55:01 +0000 UTC,LastTransitionTime:2021-02-19 10:12:05 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-03-22 00:55:01 +0000 UTC,LastTransitionTime:2021-02-19 10:12:05 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-03-22 00:55:01 +0000 UTC,LastTransitionTime:2021-02-19 10:12:05 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-03-22 00:55:01 +0000 UTC,LastTransitionTime:2021-02-19 10:12:25 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.9,},NodeAddress{Type:Hostname,Address:latest-worker,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:9501a3ffa7bf40adaca8f4cb3d1dcc93,SystemUUID:6921bb21-67c9-42ea-b514-81b1daac5968,BootID:b267d78b-f69b-4338-80e8-3f4944338e5d,KernelVersion:4.15.0-118-generic,OSImage:Ubuntu 20.04 LTS,ContainerRuntimeVersion:containerd://1.3.3-14-g449e9269,KubeletVersion:v1.21.0-alpha.0,KubeProxyVersion:v1.21.0-alpha.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/ollivier/clearwater-cassandra@sha256:7460a5e29c91e16bd36534b5284b27832e1c6e0f7c22a4bb79eda79942d250e1 docker.io/ollivier/clearwater-cassandra:hunter],SizeBytes:386500834,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead-prov@sha256:ef3803b4336ea217f9a467dac7f5c4bc3d2cd0f3cbbcc6c101419506fc3a6fa4 docker.io/ollivier/clearwater-homestead-prov:hunter],SizeBytes:360721934,},ContainerImage{Names:[docker.io/ollivier/clearwater-ellis@sha256:abcaf4a9ec74aa746f3b3949f05ee359349fce2b0b3b2deedf67164719cff5dc docker.io/ollivier/clearwater-ellis:hunter],SizeBytes:351519591,},ContainerImage{Names:[docker.io/ollivier/clearwater-homer@sha256:f8738d756137990120f6aafca7e0bc57186ab591a4b51508ba9ddf30688f4de1 docker.io/ollivier/clearwater-homer:hunter],SizeBytes:344304298,},ContainerImage{Names:[docker.io/ollivier/clearwater-astaire@sha256:5bc730435563dc3e31f3a3b7bb58b5899c453929b624052622b3a9a1c2fd04d8 docker.io/ollivier/clearwater-astaire:hunter],SizeBytes:327310970,},ContainerImage{Names:[docker.io/ollivier/clearwater-bono@sha256:0da41ddbf72c158f8e617015450587634cdf6856696005601b2108df92a27254 docker.io/ollivier/clearwater-bono:hunter],SizeBytes:303708624,},ContainerImage{Names:[docker.io/ollivier/clearwater-sprout@sha256:fd91f90bdfbfac30288ca296962e36537f8fa311d34711904e027098227d9f49 docker.io/ollivier/clearwater-sprout:hunter],SizeBytes:298627136,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead@sha256:8467d6c9caefa6a402eba927ee31f8179b2dc3386ac4768dd04f4bd630a1b9e9 docker.io/ollivier/clearwater-homestead:hunter],SizeBytes:295167572,},ContainerImage{Names:[docker.io/ollivier/clearwater-ralf@sha256:7054a741dbd4ea45fa4efd8138c07e21d35fbed627971559de5df2d03883e94f 
docker.io/ollivier/clearwater-ralf:hunter],SizeBytes:287441316,},ContainerImage{Names:[docker.io/ollivier/clearwater-chronos@sha256:1d5f9b5385db632e3076463fec5a4db5d4a15ef21ac5a112cb42fb0551ffa36d docker.io/ollivier/clearwater-chronos:hunter],SizeBytes:285504787,},ContainerImage{Names:[k8s.gcr.io/etcd:3.4.13-0],SizeBytes:254659261,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.21.0-alpha.0],SizeBytes:136866158,},ContainerImage{Names:[docker.io/aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 docker.io/aquasec/kube-hunter:0.3.1],SizeBytes:124684106,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:112029652,},ContainerImage{Names:[docker.io/gluster/glusterdynamic-provisioner@sha256:90067cb05a7d217651e84576935934fd9dcff8e6bcdcbaa416bbf36fcd09dbd1 docker.io/gluster/glusterdynamic-provisioner:v1.0],SizeBytes:111200078,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner@sha256:8bc20b52ce066dd4ea3d9eaac40c04ea8a77f47c33789676580cf4c7c9ea3c3d k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner:v1.0],SizeBytes:111199402,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.21.0-alpha.0],SizeBytes:95511852,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.21.0-alpha.0],SizeBytes:88147262,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:85425365,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.21.0-alpha.0],SizeBytes:66088748,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:49230179,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:a3f7549ea04c419276e8b84e90a515bbce5bc8a057be2ed974ec45492eca346e k8s.gcr.io/e2e-test-images/agnhost:2.26],SizeBytes:49216572,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:4f3eec1e602e1e0b0e5858c8f3d399d0ef536799ec1f7666e025f211001cb706 k8s.gcr.io/e2e-test-images/agnhost:2.28],SizeBytes:49210832,},ContainerImage{Names:[docker.io/bitnami/kubectl@sha256:471a2f06834db208d0e4bef5856550f911357c41b72d0baefa591b3714839067 docker.io/bitnami/kubectl:latest],SizeBytes:48898314,},ContainerImage{Names:[k8s.gcr.io/coredns:1.7.0],SizeBytes:45355487,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:41902332,},ContainerImage{Names:[docker.io/library/httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a docker.io/library/httpd:2.4.39-alpine],SizeBytes:41901429,},ContainerImage{Names:[docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 docker.io/library/httpd:2.4.38-alpine],SizeBytes:40765017,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 
k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:40765006,},ContainerImage{Names:[docker.io/ollivier/clearwater-live-test@sha256:1ec1c46f531b5466e62e2a3f1e65e97336e8ea0ca50f11a4305775d6aeac9a58 docker.io/ollivier/clearwater-live-test:hunter],SizeBytes:39060692,},ContainerImage{Names:[k8s.gcr.io/build-image/debian-iptables@sha256:abe8cef9e116f2d5ec1175c386e33841ff3386779138b425af876384b0fd7ccb k8s.gcr.io/build-image/debian-iptables:buster-v1.5.0],SizeBytes:38088315,},ContainerImage{Names:[k8s.gcr.io/build-image/debian-iptables@sha256:4c9410a4ee555dcb0e8b7bd6fc77c65ac400f7c5bd4555df8187630efaea6ea4 k8s.gcr.io/build-image/debian-iptables:buster-v1.3.0],SizeBytes:37934917,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:ff02aacd9766d597883fabafc7ad604c719a57611db1bcc1564c69a45b000a55 gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.17],SizeBytes:25311280,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:24757245,},ContainerImage{Names:[docker.io/pingcap/chaos-daemon@sha256:2138de599ca94104101fcd8f3bd04401f8c88f2bc721d0f77d55a81b85f4d92f docker.io/pingcap/chaos-daemon:v0.8.0],SizeBytes:21885344,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:20c828075d1e36f679d6a91e905b0927141eef5e15be0c9a1ca4a6a0ed9313d2 k8s.gcr.io/sig-storage/csi-provisioner:v2.1.0],SizeBytes:21205045,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:78e3393f5fd5ff6c1e5dada2478cfa456fb7164929e573cf9a87bf6532730679 k8s.gcr.io/sig-storage/csi-provisioner:v1.6.0],SizeBytes:19408504,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:2ffa647e8107cfd39e5f464e738dce014c9f5e51b108da36c3ab621048d0bbab k8s.gcr.io/sig-storage/csi-attacher:v2.2.0],SizeBytes:18451536,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:6c6a0332693a7c456378f6abd2bb40611826c1e1a733cadbdae2daab3125b71c k8s.gcr.io/sig-storage/csi-resizer:v0.5.0],SizeBytes:18412631,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:4051e85640c22f8e00c34dbd273576fc9e1e2829992656588062be9c0f69b04b k8s.gcr.io/e2e-test-images/nonroot:1.1],SizeBytes:17748448,},ContainerImage{Names:[docker.io/pingcap/chaos-mesh@sha256:b90c04665e0275602e52df37d4b6fdd4621df7d4e4c823fc63356559f67dea72 docker.io/pingcap/chaos-mesh:v0.8.0],SizeBytes:13863887,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e docker.io/coredns/coredns:latest docker.io/coredns/coredns:1.8.0],SizeBytes:12945155,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:4a6e0769130686518325b21b0c1d0688b54e7c79244d48e1b15634e98e40c6ef docker.io/coredns/coredns:1.7.1],SizeBytes:12935701,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 
docker.io/aquasec/kube-bench:0.3.1],SizeBytes:8042926,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:9622c6a6dac7499a055a382930f4de82905a3c5735c0753f7094115c9c871309 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v1.3.0],SizeBytes:7717137,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 22 00:58:45.395: INFO: Logging kubelet events for node latest-worker Mar 22 00:58:45.400: INFO: Logging pods the kubelet thinks are on node latest-worker Mar 22 00:58:45.420: INFO: kube-proxy-5wvjm started at 2021-02-19 10:12:05 +0000 UTC (0+1 container statuses recorded) Mar 22 00:58:45.420: INFO: Container kube-proxy ready: true, restart count 0 Mar 22 00:58:45.420: INFO: kindnet-l4mzm started at 2021-03-22 00:02:51 +0000 UTC (0+1 container statuses recorded) Mar 22 00:58:45.420: INFO: Container kindnet-cni ready: true, restart count 0 Mar 22 00:58:45.420: INFO: chaos-daemon-vb9xf started at 2021-03-22 00:02:51 +0000 UTC (0+1 container statuses recorded) Mar 22 00:58:45.420: INFO: Container chaos-daemon ready: true, restart count 0 Mar 22 00:58:45.420: INFO: chaos-controller-manager-69c479c674-rdmrr started at 2021-03-22 00:02:52 +0000 UTC (0+1 container statuses recorded) Mar 22 00:58:45.420: INFO: Container chaos-mesh ready: true, restart count 0 W0322 00:58:45.426828 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Mar 22 00:58:45.647: INFO: Latency metrics for node latest-worker Mar 22 00:58:45.647: INFO: Logging node info for node latest-worker2 Mar 22 00:58:45.651: INFO: Node Info: &Node{ObjectMeta:{latest-worker2 7d2a1377-0c6f-45fb-899e-6c307ecb1803 7005414 0 2021-02-19 10:12:05 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux io.kubernetes.storage.mock/node:some-mock-node kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-worker2 kubernetes.io/os:linux] 
map[csi.volume.kubernetes.io/nodeid:{"csi-mock-csi-mock-volumes-1005":"csi-mock-csi-mock-volumes-1005","csi-mock-csi-mock-volumes-1125":"csi-mock-csi-mock-volumes-1125","csi-mock-csi-mock-volumes-1158":"csi-mock-csi-mock-volumes-1158","csi-mock-csi-mock-volumes-1167":"csi-mock-csi-mock-volumes-1167","csi-mock-csi-mock-volumes-117":"csi-mock-csi-mock-volumes-117","csi-mock-csi-mock-volumes-123":"csi-mock-csi-mock-volumes-123","csi-mock-csi-mock-volumes-1248":"csi-mock-csi-mock-volumes-1248","csi-mock-csi-mock-volumes-132":"csi-mock-csi-mock-volumes-132","csi-mock-csi-mock-volumes-1462":"csi-mock-csi-mock-volumes-1462","csi-mock-csi-mock-volumes-1508":"csi-mock-csi-mock-volumes-1508","csi-mock-csi-mock-volumes-1620":"csi-mock-csi-mock-volumes-1620","csi-mock-csi-mock-volumes-1694":"csi-mock-csi-mock-volumes-1694","csi-mock-csi-mock-volumes-1823":"csi-mock-csi-mock-volumes-1823","csi-mock-csi-mock-volumes-1829":"csi-mock-csi-mock-volumes-1829","csi-mock-csi-mock-volumes-1852":"csi-mock-csi-mock-volumes-1852","csi-mock-csi-mock-volumes-1856":"csi-mock-csi-mock-volumes-1856","csi-mock-csi-mock-volumes-194":"csi-mock-csi-mock-volumes-194","csi-mock-csi-mock-volumes-1943":"csi-mock-csi-mock-volumes-1943","csi-mock-csi-mock-volumes-2103":"csi-mock-csi-mock-volumes-2103","csi-mock-csi-mock-volumes-2109":"csi-mock-csi-mock-volumes-2109","csi-mock-csi-mock-volumes-2134":"csi-mock-csi-mock-volumes-2134","csi-mock-csi-mock-volumes-2212":"csi-mock-csi-mock-volumes-2212","csi-mock-csi-mock-volumes-2302":"csi-mock-csi-mock-volumes-2302","csi-mock-csi-mock-volumes-2308":"csi-mock-csi-mock-volumes-2308","csi-mock-csi-mock-volumes-2407":"csi-mock-csi-mock-volumes-2407","csi-mock-csi-mock-volumes-2458":"csi-mock-csi-mock-volumes-2458","csi-mock-csi-mock-volumes-2474":"csi-mock-csi-mock-volumes-2474","csi-mock-csi-mock-volumes-254":"csi-mock-csi-mock-volumes-254","csi-mock-csi-mock-volumes-2575":"csi-mock-csi-mock-volumes-2575","csi-mock-csi-mock-volumes-2622":"csi-mock-csi-mock-volumes-2622","csi-mock-csi-mock-volumes-2651":"csi-mock-csi-mock-volumes-2651","csi-mock-csi-mock-volumes-2668":"csi-mock-csi-mock-volumes-2668","csi-mock-csi-mock-volumes-2781":"csi-mock-csi-mock-volumes-2781","csi-mock-csi-mock-volumes-2791":"csi-mock-csi-mock-volumes-2791","csi-mock-csi-mock-volumes-2823":"csi-mock-csi-mock-volumes-2823","csi-mock-csi-mock-volumes-2847":"csi-mock-csi-mock-volumes-2847","csi-mock-csi-mock-volumes-295":"csi-mock-csi-mock-volumes-295","csi-mock-csi-mock-volumes-3129":"csi-mock-csi-mock-volumes-3129","csi-mock-csi-mock-volumes-3190":"csi-mock-csi-mock-volumes-3190","csi-mock-csi-mock-volumes-3428":"csi-mock-csi-mock-volumes-3428","csi-mock-csi-mock-volumes-3490":"csi-mock-csi-mock-volumes-3490","csi-mock-csi-mock-volumes-3491":"csi-mock-csi-mock-volumes-3491","csi-mock-csi-mock-volumes-3532":"csi-mock-csi-mock-volumes-3532","csi-mock-csi-mock-volumes-3609":"csi-mock-csi-mock-volumes-3609","csi-mock-csi-mock-volumes-368":"csi-mock-csi-mock-volumes-368","csi-mock-csi-mock-volumes-3698":"csi-mock-csi-mock-volumes-3698","csi-mock-csi-mock-volumes-3714":"csi-mock-csi-mock-volumes-3714","csi-mock-csi-mock-volumes-3720":"csi-mock-csi-mock-volumes-3720","csi-mock-csi-mock-volumes-3722":"csi-mock-csi-mock-volumes-3722","csi-mock-csi-mock-volumes-3723":"csi-mock-csi-mock-volumes-3723","csi-mock-csi-mock-volumes-3783":"csi-mock-csi-mock-volumes-3783","csi-mock-csi-mock-volumes-3887":"csi-mock-csi-mock-volumes-3887","csi-mock-csi-mock-volumes-3973":"csi-mock-csi-mock-volumes-3973","csi-mock-csi-mock-volumes-4074":"
csi-mock-csi-mock-volumes-4074","csi-mock-csi-mock-volumes-4164":"csi-mock-csi-mock-volumes-4164","csi-mock-csi-mock-volumes-4181":"csi-mock-csi-mock-volumes-4181","csi-mock-csi-mock-volumes-4442":"csi-mock-csi-mock-volumes-4442","csi-mock-csi-mock-volumes-4483":"csi-mock-csi-mock-volumes-4483","csi-mock-csi-mock-volumes-4549":"csi-mock-csi-mock-volumes-4549","csi-mock-csi-mock-volumes-4707":"csi-mock-csi-mock-volumes-4707","csi-mock-csi-mock-volumes-4906":"csi-mock-csi-mock-volumes-4906","csi-mock-csi-mock-volumes-4977":"csi-mock-csi-mock-volumes-4977","csi-mock-csi-mock-volumes-5116":"csi-mock-csi-mock-volumes-5116","csi-mock-csi-mock-volumes-5166":"csi-mock-csi-mock-volumes-5166","csi-mock-csi-mock-volumes-5233":"csi-mock-csi-mock-volumes-5233","csi-mock-csi-mock-volumes-5344":"csi-mock-csi-mock-volumes-5344","csi-mock-csi-mock-volumes-5406":"csi-mock-csi-mock-volumes-5406","csi-mock-csi-mock-volumes-5433":"csi-mock-csi-mock-volumes-5433","csi-mock-csi-mock-volumes-5483":"csi-mock-csi-mock-volumes-5483","csi-mock-csi-mock-volumes-5486":"csi-mock-csi-mock-volumes-5486","csi-mock-csi-mock-volumes-5520":"csi-mock-csi-mock-volumes-5520","csi-mock-csi-mock-volumes-5540":"csi-mock-csi-mock-volumes-5540","csi-mock-csi-mock-volumes-5592":"csi-mock-csi-mock-volumes-5592","csi-mock-csi-mock-volumes-5741":"csi-mock-csi-mock-volumes-5741","csi-mock-csi-mock-volumes-5753":"csi-mock-csi-mock-volumes-5753","csi-mock-csi-mock-volumes-5790":"csi-mock-csi-mock-volumes-5790","csi-mock-csi-mock-volumes-5820":"csi-mock-csi-mock-volumes-5820","csi-mock-csi-mock-volumes-5830":"csi-mock-csi-mock-volumes-5830","csi-mock-csi-mock-volumes-5880":"csi-mock-csi-mock-volumes-5880","csi-mock-csi-mock-volumes-5886":"csi-mock-csi-mock-volumes-5886","csi-mock-csi-mock-volumes-5899":"csi-mock-csi-mock-volumes-5899","csi-mock-csi-mock-volumes-5925":"csi-mock-csi-mock-volumes-5925","csi-mock-csi-mock-volumes-5928":"csi-mock-csi-mock-volumes-5928","csi-mock-csi-mock-volumes-6098":"csi-mock-csi-mock-volumes-6098","csi-mock-csi-mock-volumes-6154":"csi-mock-csi-mock-volumes-6154","csi-mock-csi-mock-volumes-6193":"csi-mock-csi-mock-volumes-6193","csi-mock-csi-mock-volumes-6237":"csi-mock-csi-mock-volumes-6237","csi-mock-csi-mock-volumes-6393":"csi-mock-csi-mock-volumes-6393","csi-mock-csi-mock-volumes-6394":"csi-mock-csi-mock-volumes-6394","csi-mock-csi-mock-volumes-6468":"csi-mock-csi-mock-volumes-6468","csi-mock-csi-mock-volumes-6508":"csi-mock-csi-mock-volumes-6508","csi-mock-csi-mock-volumes-6516":"csi-mock-csi-mock-volumes-6516","csi-mock-csi-mock-volumes-6520":"csi-mock-csi-mock-volumes-6520","csi-mock-csi-mock-volumes-6574":"csi-mock-csi-mock-volumes-6574","csi-mock-csi-mock-volumes-6663":"csi-mock-csi-mock-volumes-6663","csi-mock-csi-mock-volumes-6715":"csi-mock-csi-mock-volumes-6715","csi-mock-csi-mock-volumes-6754":"csi-mock-csi-mock-volumes-6754","csi-mock-csi-mock-volumes-6804":"csi-mock-csi-mock-volumes-6804","csi-mock-csi-mock-volumes-6918":"csi-mock-csi-mock-volumes-6918","csi-mock-csi-mock-volumes-6925":"csi-mock-csi-mock-volumes-6925","csi-mock-csi-mock-volumes-7092":"csi-mock-csi-mock-volumes-7092","csi-mock-csi-mock-volumes-7139":"csi-mock-csi-mock-volumes-7139","csi-mock-csi-mock-volumes-7270":"csi-mock-csi-mock-volumes-7270","csi-mock-csi-mock-volumes-7273":"csi-mock-csi-mock-volumes-7273","csi-mock-csi-mock-volumes-7442":"csi-mock-csi-mock-volumes-7442","csi-mock-csi-mock-volumes-7448":"csi-mock-csi-mock-volumes-7448","csi-mock-csi-mock-volumes-7543":"csi-mock-csi-mock-volumes-7543","csi-mock-csi-mock-volume
s-7597":"csi-mock-csi-mock-volumes-7597","csi-mock-csi-mock-volumes-7608":"csi-mock-csi-mock-volumes-7608","csi-mock-csi-mock-volumes-7642":"csi-mock-csi-mock-volumes-7642","csi-mock-csi-mock-volumes-7659":"csi-mock-csi-mock-volumes-7659","csi-mock-csi-mock-volumes-7725":"csi-mock-csi-mock-volumes-7725","csi-mock-csi-mock-volumes-7760":"csi-mock-csi-mock-volumes-7760","csi-mock-csi-mock-volumes-778":"csi-mock-csi-mock-volumes-778","csi-mock-csi-mock-volumes-7811":"csi-mock-csi-mock-volumes-7811","csi-mock-csi-mock-volumes-7819":"csi-mock-csi-mock-volumes-7819","csi-mock-csi-mock-volumes-791":"csi-mock-csi-mock-volumes-791","csi-mock-csi-mock-volumes-7929":"csi-mock-csi-mock-volumes-7929","csi-mock-csi-mock-volumes-7930":"csi-mock-csi-mock-volumes-7930","csi-mock-csi-mock-volumes-7933":"csi-mock-csi-mock-volumes-7933","csi-mock-csi-mock-volumes-7950":"csi-mock-csi-mock-volumes-7950","csi-mock-csi-mock-volumes-8005":"csi-mock-csi-mock-volumes-8005","csi-mock-csi-mock-volumes-8070":"csi-mock-csi-mock-volumes-8070","csi-mock-csi-mock-volumes-8123":"csi-mock-csi-mock-volumes-8123","csi-mock-csi-mock-volumes-8132":"csi-mock-csi-mock-volumes-8132","csi-mock-csi-mock-volumes-8134":"csi-mock-csi-mock-volumes-8134","csi-mock-csi-mock-volumes-8381":"csi-mock-csi-mock-volumes-8381","csi-mock-csi-mock-volumes-8391":"csi-mock-csi-mock-volumes-8391","csi-mock-csi-mock-volumes-8409":"csi-mock-csi-mock-volumes-8409","csi-mock-csi-mock-volumes-8420":"csi-mock-csi-mock-volumes-8420","csi-mock-csi-mock-volumes-8467":"csi-mock-csi-mock-volumes-8467","csi-mock-csi-mock-volumes-8581":"csi-mock-csi-mock-volumes-8581","csi-mock-csi-mock-volumes-8619":"csi-mock-csi-mock-volumes-8619","csi-mock-csi-mock-volumes-8708":"csi-mock-csi-mock-volumes-8708","csi-mock-csi-mock-volumes-8766":"csi-mock-csi-mock-volumes-8766","csi-mock-csi-mock-volumes-8789":"csi-mock-csi-mock-volumes-8789","csi-mock-csi-mock-volumes-8800":"csi-mock-csi-mock-volumes-8800","csi-mock-csi-mock-volumes-8819":"csi-mock-csi-mock-volumes-8819","csi-mock-csi-mock-volumes-8830":"csi-mock-csi-mock-volumes-8830","csi-mock-csi-mock-volumes-9034":"csi-mock-csi-mock-volumes-9034","csi-mock-csi-mock-volumes-9051":"csi-mock-csi-mock-volumes-9051","csi-mock-csi-mock-volumes-9052":"csi-mock-csi-mock-volumes-9052","csi-mock-csi-mock-volumes-9211":"csi-mock-csi-mock-volumes-9211","csi-mock-csi-mock-volumes-9234":"csi-mock-csi-mock-volumes-9234","csi-mock-csi-mock-volumes-9238":"csi-mock-csi-mock-volumes-9238","csi-mock-csi-mock-volumes-9316":"csi-mock-csi-mock-volumes-9316","csi-mock-csi-mock-volumes-9344":"csi-mock-csi-mock-volumes-9344","csi-mock-csi-mock-volumes-9386":"csi-mock-csi-mock-volumes-9386","csi-mock-csi-mock-volumes-941":"csi-mock-csi-mock-volumes-941","csi-mock-csi-mock-volumes-9416":"csi-mock-csi-mock-volumes-9416","csi-mock-csi-mock-volumes-9426":"csi-mock-csi-mock-volumes-9426","csi-mock-csi-mock-volumes-9429":"csi-mock-csi-mock-volumes-9429","csi-mock-csi-mock-volumes-9438":"csi-mock-csi-mock-volumes-9438","csi-mock-csi-mock-volumes-9439":"csi-mock-csi-mock-volumes-9439","csi-mock-csi-mock-volumes-9665":"csi-mock-csi-mock-volumes-9665","csi-mock-csi-mock-volumes-9772":"csi-mock-csi-mock-volumes-9772","csi-mock-csi-mock-volumes-9793":"csi-mock-csi-mock-volumes-9793","csi-mock-csi-mock-volumes-9921":"csi-mock-csi-mock-volumes-9921","csi-mock-csi-mock-volumes-9953":"csi-mock-csi-mock-volumes-9953"} kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 
volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-02-19 10:12:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {kube-controller-manager Update v1 2021-03-22 00:34:44 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}} {e2e.test Update v1 2021-03-22 00:44:05 +0000 UTC FieldsV1 {"f:status":{"f:capacity":{"f:example.com/fakecpu":{}}}}} {kubelet Update v1 2021-03-22 00:44:09 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:csi.volume.kubernetes.io/nodeid":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:io.kubernetes.storage.mock/node":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:example.com/fakecpu":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:kind://docker/latest/latest-worker2,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},example.com/fakecpu: {{1 3} {} 1k DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},example.com/fakecpu: {{1 3} {} 1k DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-03-22 00:54:11 +0000 UTC,LastTransitionTime:2021-02-19 10:12:05 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-03-22 00:54:11 +0000 
UTC,LastTransitionTime:2021-02-19 10:12:05 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-03-22 00:54:11 +0000 UTC,LastTransitionTime:2021-02-19 10:12:05 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-03-22 00:54:11 +0000 UTC,LastTransitionTime:2021-02-19 10:12:35 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.13,},NodeAddress{Type:Hostname,Address:latest-worker2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:c99819c175ab4bf18f91789643e4cec7,SystemUUID:67d3f1bb-f61c-4599-98ac-5879bee4ddde,BootID:b267d78b-f69b-4338-80e8-3f4944338e5d,KernelVersion:4.15.0-118-generic,OSImage:Ubuntu 20.04 LTS,ContainerRuntimeVersion:containerd://1.3.3-14-g449e9269,KubeletVersion:v1.21.0-alpha.0,KubeProxyVersion:v1.21.0-alpha.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/ollivier/clearwater-cassandra@sha256:7460a5e29c91e16bd36534b5284b27832e1c6e0f7c22a4bb79eda79942d250e1 docker.io/ollivier/clearwater-cassandra:hunter],SizeBytes:386500834,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead-prov@sha256:ef3803b4336ea217f9a467dac7f5c4bc3d2cd0f3cbbcc6c101419506fc3a6fa4 docker.io/ollivier/clearwater-homestead-prov:hunter],SizeBytes:360721934,},ContainerImage{Names:[docker.io/ollivier/clearwater-ellis@sha256:abcaf4a9ec74aa746f3b3949f05ee359349fce2b0b3b2deedf67164719cff5dc docker.io/ollivier/clearwater-ellis:hunter],SizeBytes:351519591,},ContainerImage{Names:[docker.io/ollivier/clearwater-homer@sha256:f8738d756137990120f6aafca7e0bc57186ab591a4b51508ba9ddf30688f4de1 docker.io/ollivier/clearwater-homer:hunter],SizeBytes:344304298,},ContainerImage{Names:[docker.io/ollivier/clearwater-astaire@sha256:5bc730435563dc3e31f3a3b7bb58b5899c453929b624052622b3a9a1c2fd04d8 docker.io/ollivier/clearwater-astaire:hunter],SizeBytes:327310970,},ContainerImage{Names:[docker.io/ollivier/clearwater-bono@sha256:0da41ddbf72c158f8e617015450587634cdf6856696005601b2108df92a27254 docker.io/ollivier/clearwater-bono:hunter],SizeBytes:303708624,},ContainerImage{Names:[docker.io/ollivier/clearwater-sprout@sha256:fd91f90bdfbfac30288ca296962e36537f8fa311d34711904e027098227d9f49 docker.io/ollivier/clearwater-sprout:hunter],SizeBytes:298627136,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead@sha256:8467d6c9caefa6a402eba927ee31f8179b2dc3386ac4768dd04f4bd630a1b9e9 docker.io/ollivier/clearwater-homestead:hunter],SizeBytes:295167572,},ContainerImage{Names:[docker.io/ollivier/clearwater-ralf@sha256:7054a741dbd4ea45fa4efd8138c07e21d35fbed627971559de5df2d03883e94f docker.io/ollivier/clearwater-ralf:hunter],SizeBytes:287441316,},ContainerImage{Names:[docker.io/ollivier/clearwater-chronos@sha256:1d5f9b5385db632e3076463fec5a4db5d4a15ef21ac5a112cb42fb0551ffa36d docker.io/ollivier/clearwater-chronos:hunter],SizeBytes:285504787,},ContainerImage{Names:[k8s.gcr.io/etcd:3.4.13-0],SizeBytes:254659261,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.21.0-alpha.0],SizeBytes:136866158,},ContainerImage{Names:[docker.io/aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 
docker.io/aquasec/kube-hunter:0.3.1],SizeBytes:124684106,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:112029652,},ContainerImage{Names:[docker.io/gluster/glusterdynamic-provisioner@sha256:90067cb05a7d217651e84576935934fd9dcff8e6bcdcbaa416bbf36fcd09dbd1 docker.io/gluster/glusterdynamic-provisioner:v1.0],SizeBytes:111200078,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner@sha256:8bc20b52ce066dd4ea3d9eaac40c04ea8a77f47c33789676580cf4c7c9ea3c3d k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner:v1.0],SizeBytes:111199402,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.21.0-alpha.0],SizeBytes:95511852,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.21.0-alpha.0],SizeBytes:88147262,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:85425365,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.21.0-alpha.0],SizeBytes:66088748,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:49230179,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:a3f7549ea04c419276e8b84e90a515bbce5bc8a057be2ed974ec45492eca346e k8s.gcr.io/e2e-test-images/agnhost:2.26],SizeBytes:49216572,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:4f3eec1e602e1e0b0e5858c8f3d399d0ef536799ec1f7666e025f211001cb706 k8s.gcr.io/e2e-test-images/agnhost:2.28],SizeBytes:49210832,},ContainerImage{Names:[k8s.gcr.io/coredns:1.7.0],SizeBytes:45355487,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:41902332,},ContainerImage{Names:[docker.io/library/httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a docker.io/library/httpd:2.4.39-alpine],SizeBytes:41901429,},ContainerImage{Names:[docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 docker.io/library/httpd:2.4.38-alpine],SizeBytes:40765017,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:40765006,},ContainerImage{Names:[docker.io/ollivier/clearwater-live-test@sha256:1ec1c46f531b5466e62e2a3f1e65e97336e8ea0ca50f11a4305775d6aeac9a58 docker.io/ollivier/clearwater-live-test:hunter],SizeBytes:39060692,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:ff02aacd9766d597883fabafc7ad604c719a57611db1bcc1564c69a45b000a55 gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.17],SizeBytes:25311280,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 
k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:24757245,},ContainerImage{Names:[docker.io/pingcap/chaos-daemon@sha256:2138de599ca94104101fcd8f3bd04401f8c88f2bc721d0f77d55a81b85f4d92f docker.io/pingcap/chaos-daemon:v0.8.0],SizeBytes:21885344,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:20c828075d1e36f679d6a91e905b0927141eef5e15be0c9a1ca4a6a0ed9313d2 k8s.gcr.io/sig-storage/csi-provisioner:v2.1.0],SizeBytes:21205045,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:78e3393f5fd5ff6c1e5dada2478cfa456fb7164929e573cf9a87bf6532730679 k8s.gcr.io/sig-storage/csi-provisioner:v1.6.0],SizeBytes:19408504,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/regression-issue-74839@sha256:b4f1d8d61bdad84bd50442d161d5460e4019d53e989b64220fdbc62fc87d76bf k8s.gcr.io/e2e-test-images/regression-issue-74839:1.2],SizeBytes:18651485,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:2ffa647e8107cfd39e5f464e738dce014c9f5e51b108da36c3ab621048d0bbab k8s.gcr.io/sig-storage/csi-attacher:v2.2.0],SizeBytes:18451536,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:6c6a0332693a7c456378f6abd2bb40611826c1e1a733cadbdae2daab3125b71c k8s.gcr.io/sig-storage/csi-resizer:v0.5.0],SizeBytes:18412631,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:4051e85640c22f8e00c34dbd273576fc9e1e2829992656588062be9c0f69b04b k8s.gcr.io/e2e-test-images/nonroot:1.1],SizeBytes:17748448,},ContainerImage{Names:[docker.io/pingcap/chaos-mesh@sha256:b90c04665e0275602e52df37d4b6fdd4621df7d4e4c823fc63356559f67dea72 docker.io/pingcap/chaos-mesh:v0.8.0],SizeBytes:13863887,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e docker.io/coredns/coredns:1.8.0],SizeBytes:12945155,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:4a6e0769130686518325b21b0c1d0688b54e7c79244d48e1b15634e98e40c6ef docker.io/coredns/coredns:1.7.1],SizeBytes:12935701,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:642ff9910da6ea9a8624b0234eef52af9ca75ecbec474c5507cb096bdfbae4e5 docker.io/coredns/coredns:latest],SizeBytes:12893350,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 docker.io/aquasec/kube-bench:0.3.1],SizeBytes:8042926,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:9622c6a6dac7499a055a382930f4de82905a3c5735c0753f7094115c9c871309 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v1.3.0],SizeBytes:7717137,},ContainerImage{Names:[quay.io/coreos/etcd:v2.2.5],SizeBytes:7670543,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 22 00:58:45.652: INFO: Logging kubelet events for node latest-worker2 Mar 22 00:58:45.656: INFO: Logging pods the kubelet thinks are on node latest-worker2 Mar 22 00:58:45.674: INFO: kindnet-7qb7q started at 2021-03-22 00:02:52 +0000 UTC (0+1 container statuses recorded) Mar 22 00:58:45.674: INFO: Container kindnet-cni ready: true, restart count 0 Mar 22 00:58:45.674: INFO: kube-proxy-7q92q started at 2021-02-19 10:12:05 +0000 UTC (0+1 
container statuses recorded) Mar 22 00:58:45.674: INFO: Container kube-proxy ready: true, restart count 0 Mar 22 00:58:45.674: INFO: chaos-daemon-4zjcg started at 2021-03-22 00:02:52 +0000 UTC (0+1 container statuses recorded) Mar 22 00:58:45.674: INFO: Container chaos-daemon ready: true, restart count 0 W0322 00:58:45.679809 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Mar 22 00:58:45.908: INFO: Latency metrics for node latest-worker2 Mar 22 00:58:45.908: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-5199" for this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • Failure [183.509 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance] [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Mar 22 00:57:52.842: Unexpected error: <*errors.errorString | 0xc0010260d0>: { s: "no subset of available IP address found for the endpoint affinity-clusterip-transition within timeout 2m0s", } no subset of available IP address found for the endpoint affinity-clusterip-transition within timeout 2m0s occurred /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2563 ------------------------------ {"msg":"FAILED [sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","total":330,"completed":167,"skipped":2910,"failed":11,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]"]} SSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 00:58:45.917: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets 
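------------------------------
Context for the [sig-network] Services failure recorded above: the test creates a ClusterIP Service named affinity-clusterip-transition with sessionAffinity: ClientIP, checks that repeated requests land on a single endpoint, then switches affinity to None; the run above never observed a single-endpoint subset within the 2m0s timeout. Below is a minimal client-go sketch of the Service toggle the test exercises, not the e2e framework's own code; the kubeconfig path and Service name are taken from the log, while the namespace, selector, and ports are hypothetical placeholders.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the same kubeconfig the e2e run above uses.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ns := "default" // hypothetical; the suite generates a services-* namespace per test

	// ClusterIP Service with client-IP session affinity, as in the failing test.
	svc := &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "affinity-clusterip-transition"},
		Spec: corev1.ServiceSpec{
			Selector:        map[string]string{"app": "affinity"}, // placeholder selector
			SessionAffinity: corev1.ServiceAffinityClientIP,
			Ports: []corev1.ServicePort{{
				Port:       80,
				TargetPort: intstr.FromInt(8080),
			}},
		},
	}
	created, err := cs.CoreV1().Services(ns).Create(context.TODO(), svc, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}

	// Switch affinity off; kube-proxy should begin spreading new connections
	// across all ready endpoints instead of pinning each client to one pod.
	created.Spec.SessionAffinity = corev1.ServiceAffinityNone
	if _, err := cs.CoreV1().Services(ns).Update(context.TODO(), created, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
	fmt.Println("session affinity switched to None")
}

The error string above ("no subset of available IP address found for the endpoint ... within timeout 2m0s") means the framework kept seeing responses from more than one endpoint while ClientIP affinity was supposed to pin them, so the assertion timed out rather than the API calls failing.
------------------------------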
STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:135 [It] should rollback without unnecessary restarts [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Mar 22 00:58:46.054: INFO: Create a RollingUpdate DaemonSet Mar 22 00:58:46.058: INFO: Check that daemon pods launch on every node of the cluster Mar 22 00:58:46.077: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 22 00:58:46.092: INFO: Number of nodes with available pods: 0 Mar 22 00:58:46.092: INFO: Node latest-worker is running more than one daemon pod Mar 22 00:58:47.100: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 22 00:58:47.104: INFO: Number of nodes with available pods: 0 Mar 22 00:58:47.104: INFO: Node latest-worker is running more than one daemon pod Mar 22 00:58:48.318: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 22 00:58:48.330: INFO: Number of nodes with available pods: 0 Mar 22 00:58:48.330: INFO: Node latest-worker is running more than one daemon pod Mar 22 00:58:49.199: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 22 00:58:49.203: INFO: Number of nodes with available pods: 0 Mar 22 00:58:49.203: INFO: Node latest-worker is running more than one daemon pod Mar 22 00:58:50.098: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 22 00:58:50.101: INFO: Number of nodes with available pods: 0 Mar 22 00:58:50.101: INFO: Node latest-worker is running more than one daemon pod Mar 22 00:58:51.098: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 22 00:58:51.102: INFO: Number of nodes with available pods: 2 Mar 22 00:58:51.102: INFO: Number of running nodes: 2, number of available pods: 2 Mar 22 00:58:51.102: INFO: Update the DaemonSet to trigger a rollout Mar 22 00:58:51.114: INFO: Updating DaemonSet daemon-set Mar 22 00:59:45.188: INFO: Roll back the DaemonSet before rollout is complete Mar 22 00:59:45.195: INFO: Updating DaemonSet daemon-set Mar 22 00:59:45.195: INFO: Make sure DaemonSet rollback is complete Mar 22 00:59:45.217: INFO: Wrong image for pod: daemon-set-nfvpt. Expected: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1, got: foo:non-existent. Mar 22 00:59:45.217: INFO: Pod daemon-set-nfvpt is not available Mar 22 00:59:45.230: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 22 00:59:46.234: INFO: Wrong image for pod: daemon-set-nfvpt. Expected: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1, got: foo:non-existent. 
Mar 22 00:59:46.234: INFO: Pod daemon-set-nfvpt is not available Mar 22 00:59:46.238: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 22 00:59:47.236: INFO: Wrong image for pod: daemon-set-nfvpt. Expected: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1, got: foo:non-existent. Mar 22 00:59:47.236: INFO: Pod daemon-set-nfvpt is not available Mar 22 00:59:47.241: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 22 00:59:48.236: INFO: Wrong image for pod: daemon-set-nfvpt. Expected: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1, got: foo:non-existent. Mar 22 00:59:48.236: INFO: Pod daemon-set-nfvpt is not available Mar 22 00:59:48.242: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 22 00:59:49.311: INFO: Wrong image for pod: daemon-set-nfvpt. Expected: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1, got: foo:non-existent. Mar 22 00:59:49.311: INFO: Pod daemon-set-nfvpt is not available Mar 22 00:59:49.316: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 22 00:59:50.235: INFO: Wrong image for pod: daemon-set-nfvpt. Expected: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1, got: foo:non-existent. Mar 22 00:59:50.235: INFO: Pod daemon-set-nfvpt is not available Mar 22 00:59:50.241: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 22 00:59:51.235: INFO: Wrong image for pod: daemon-set-nfvpt. Expected: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1, got: foo:non-existent. Mar 22 00:59:51.235: INFO: Pod daemon-set-nfvpt is not available Mar 22 00:59:51.239: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 22 00:59:52.235: INFO: Wrong image for pod: daemon-set-nfvpt. Expected: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1, got: foo:non-existent. Mar 22 00:59:52.235: INFO: Pod daemon-set-nfvpt is not available Mar 22 00:59:52.240: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 22 00:59:53.236: INFO: Wrong image for pod: daemon-set-nfvpt. Expected: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1, got: foo:non-existent. Mar 22 00:59:53.236: INFO: Pod daemon-set-nfvpt is not available Mar 22 00:59:53.241: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 22 00:59:54.235: INFO: Wrong image for pod: daemon-set-nfvpt. Expected: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1, got: foo:non-existent. Mar 22 00:59:54.235: INFO: Pod daemon-set-nfvpt is not available Mar 22 00:59:54.241: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 22 00:59:55.236: INFO: Wrong image for pod: daemon-set-nfvpt. 
Expected: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1, got: foo:non-existent. Mar 22 00:59:55.236: INFO: Pod daemon-set-nfvpt is not available Mar 22 00:59:55.242: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 22 00:59:56.234: INFO: Wrong image for pod: daemon-set-nfvpt. Expected: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1, got: foo:non-existent. Mar 22 00:59:56.234: INFO: Pod daemon-set-nfvpt is not available Mar 22 00:59:56.237: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 22 00:59:57.235: INFO: Wrong image for pod: daemon-set-nfvpt. Expected: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1, got: foo:non-existent. Mar 22 00:59:57.235: INFO: Pod daemon-set-nfvpt is not available Mar 22 00:59:57.240: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 22 00:59:58.235: INFO: Wrong image for pod: daemon-set-nfvpt. Expected: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1, got: foo:non-existent. Mar 22 00:59:58.235: INFO: Pod daemon-set-nfvpt is not available Mar 22 00:59:58.239: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 22 00:59:59.237: INFO: Wrong image for pod: daemon-set-nfvpt. Expected: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1, got: foo:non-existent. Mar 22 00:59:59.237: INFO: Pod daemon-set-nfvpt is not available Mar 22 00:59:59.242: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 22 01:00:00.235: INFO: Wrong image for pod: daemon-set-nfvpt. Expected: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1, got: foo:non-existent. Mar 22 01:00:00.235: INFO: Pod daemon-set-nfvpt is not available Mar 22 01:00:00.239: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 22 01:00:01.235: INFO: Wrong image for pod: daemon-set-nfvpt. Expected: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1, got: foo:non-existent. Mar 22 01:00:01.235: INFO: Pod daemon-set-nfvpt is not available Mar 22 01:00:01.239: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 22 01:00:02.236: INFO: Wrong image for pod: daemon-set-nfvpt. Expected: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1, got: foo:non-existent. Mar 22 01:00:02.236: INFO: Pod daemon-set-nfvpt is not available Mar 22 01:00:02.241: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 22 01:00:03.234: INFO: Wrong image for pod: daemon-set-nfvpt. Expected: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1, got: foo:non-existent. 
Mar 22 01:00:03.234: INFO: Pod daemon-set-nfvpt is not available Mar 22 01:00:03.238: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 22 01:00:04.235: INFO: Wrong image for pod: daemon-set-nfvpt. Expected: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1, got: foo:non-existent. Mar 22 01:00:04.235: INFO: Pod daemon-set-nfvpt is not available Mar 22 01:00:04.239: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node [the same three-entry poll cycle — Wrong image for pod, Pod daemon-set-nfvpt is not available, control-plane taint skipped — repeats once per second from 01:00:05 through 01:00:43] Mar 22 01:00:44.236: INFO: Wrong image for pod: daemon-set-nfvpt. Expected: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1, got: foo:non-existent.
Mar 22 01:00:44.236: INFO: Pod daemon-set-nfvpt is not available Mar 22 01:00:44.241: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 22 01:00:45.242: INFO: Pod daemon-set-cjqg4 is not available Mar 22 01:00:45.247: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node [AfterEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:101 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-8318, will wait for the garbage collector to delete the pods Mar 22 01:00:45.314: INFO: Deleting DaemonSet.extensions daemon-set took: 7.187607ms Mar 22 01:00:45.914: INFO: Terminating DaemonSet.extensions daemon-set pods took: 600.534048ms Mar 22 01:01:45.020: INFO: Number of nodes with available pods: 0 Mar 22 01:01:45.020: INFO: Number of running nodes: 0, number of available pods: 0 Mar 22 01:01:45.023: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"7006688"},"items":null} Mar 22 01:01:45.026: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"7006688"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 01:01:45.062: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-8318" for this suite. • [SLOW TEST:179.154 seconds] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should rollback without unnecessary restarts [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","total":330,"completed":168,"skipped":2916,"failed":11,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]"]} SSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life 
of a configMap. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 01:01:45.071: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a configMap. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ConfigMap STEP: Ensuring resource quota status captures configMap creation STEP: Deleting a ConfigMap STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 01:02:13.269: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-1740" for this suite. • [SLOW TEST:28.209 seconds] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a configMap. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. 
[Conformance]","total":330,"completed":169,"skipped":2923,"failed":11,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Variable Expansion should fail substituting values in a volume subpath with backticks [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 01:02:13.281: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should fail substituting values in a volume subpath with backticks [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Mar 22 01:02:17.483: INFO: Deleting pod "var-expansion-52c7b6d8-c29b-46e4-95d0-30359ffd5678" in namespace "var-expansion-8113" Mar 22 01:02:17.489: INFO: Wait up to 5m0s for pod "var-expansion-52c7b6d8-c29b-46e4-95d0-30359ffd5678" to be fully deleted [AfterEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 01:02:45.558: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-8113" for this suite. 
• [SLOW TEST:32.287 seconds] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should fail substituting values in a volume subpath with backticks [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Variable Expansion should fail substituting values in a volume subpath with backticks [Slow] [Conformance]","total":330,"completed":170,"skipped":2942,"failed":11,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 01:02:45.568: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:38 [It] should print the output to logs [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Mar 22 01:02:45.697: INFO: The status of Pod busybox-scheduling-d240759e-f084-4853-ae95-0f910c2d7a32 is Pending, waiting for it to be Running (with Ready = true) Mar 22 01:02:47.834: INFO: The status of Pod busybox-scheduling-d240759e-f084-4853-ae95-0f910c2d7a32 is Pending, waiting for it to be Running (with Ready = true) Mar 22 01:02:49.702: INFO: The status of Pod busybox-scheduling-d240759e-f084-4853-ae95-0f910c2d7a32 is Running (Ready = true) [AfterEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 01:02:49.726: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-8052" for this suite. 
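------------------------------
The kubelet spec above boils down to "write to stdout, read it back through the API server". A minimal client-go sketch of that flow (pod name, image and namespace are illustrative assumptions; the real test polls until the pod is Running, which is crudely stubbed here with a sleep):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.TODO()

	// A busybox pod that prints a known string to stdout and then sleeps.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "busybox-logs-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "main",
				Image:   "busybox",
				Command: []string{"sh", "-c", "echo hello-from-kubelet && sleep 3600"},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(ctx, pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
	time.Sleep(10 * time.Second) // stand-in for the test's poll-until-Running loop

	// Fetch the container log through the API server, as `kubectl logs` does.
	raw, err := cs.CoreV1().Pods("default").GetLogs("busybox-logs-demo", &corev1.PodLogOptions{}).DoRaw(ctx)
	if err != nil {
		panic(err)
	}
	fmt.Printf("pod logs: %s", raw)
}
------------------------------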
•{"msg":"PASSED [sig-node] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":330,"completed":171,"skipped":2965,"failed":11,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]"]} SSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 01:02:49.736: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] should include custom resource definition resources in discovery documents [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: fetching the /apis discovery document STEP: finding the apiextensions.k8s.io API group in the /apis discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/apiextensions.k8s.io discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis/apiextensions.k8s.io discovery document STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 01:02:49.830: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-1007" for this suite. 
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":330,"completed":172,"skipped":2968,"failed":11,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 01:02:49.866: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] removes definition from spec when one version gets changed to not be served [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: set up a multi version CRD Mar 22 01:02:50.053: INFO: >>> kubeConfig: /root/.kube/config STEP: mark a version not serverd STEP: check the unserved version gets removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 01:03:08.200: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-5749" for this suite. 
• [SLOW TEST:18.342 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 removes definition from spec when one version gets changed to not be served [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":330,"completed":173,"skipped":2986,"failed":11,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]"]} SS ------------------------------ [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 01:03:08.208: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 [BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:105 STEP: Creating service test in namespace statefulset-8035 [It] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating stateful set ss in namespace statefulset-8035 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-8035 Mar 22 01:03:08.418: INFO: Found 0 stateful pods, waiting for 1 Mar 22 01:03:18.424: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod Mar 22 
01:03:18.428: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=statefulset-8035 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Mar 22 01:03:22.160: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" Mar 22 01:03:22.160: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Mar 22 01:03:22.160: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Mar 22 01:03:22.212: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Mar 22 01:03:32.219: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Mar 22 01:03:32.219: INFO: Waiting for statefulset status.replicas updated to 0 Mar 22 01:03:32.266: INFO: POD NODE PHASE GRACE CONDITIONS Mar 22 01:03:32.266: INFO: ss-0 latest-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-03-22 01:03:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-03-22 01:03:22 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-03-22 01:03:22 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-03-22 01:03:08 +0000 UTC }] Mar 22 01:03:32.266: INFO: Mar 22 01:03:32.266: INFO: StatefulSet ss has not reached scale 3, at 1 Mar 22 01:03:33.270: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.96301914s Mar 22 01:03:34.566: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.958816297s Mar 22 01:03:35.570: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.664263407s Mar 22 01:03:36.575: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.658789691s Mar 22 01:03:37.581: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.653803726s Mar 22 01:03:38.586: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.648316984s Mar 22 01:03:39.593: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.642623499s Mar 22 01:03:40.598: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.636415406s Mar 22 01:03:41.603: INFO: Verifying statefulset ss doesn't scale past 3 for another 632.357947ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-8035 Mar 22 01:03:42.615: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=statefulset-8035 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 22 01:03:42.849: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" Mar 22 01:03:42.849: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Mar 22 01:03:42.849: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Mar 22 01:03:42.850: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=statefulset-8035 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 22 01:03:43.078: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\n" Mar 22 
01:03:43.078: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Mar 22 01:03:43.078: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Mar 22 01:03:43.078: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=statefulset-8035 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 22 01:03:43.292: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\n" Mar 22 01:03:43.292: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Mar 22 01:03:43.292: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Mar 22 01:03:43.296: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false Mar 22 01:03:53.303: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Mar 22 01:03:53.303: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Mar 22 01:03:53.303: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod Mar 22 01:03:53.306: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=statefulset-8035 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Mar 22 01:03:53.532: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" Mar 22 01:03:53.532: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Mar 22 01:03:53.532: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Mar 22 01:03:53.532: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=statefulset-8035 exec ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Mar 22 01:03:53.771: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" Mar 22 01:03:53.772: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Mar 22 01:03:53.772: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Mar 22 01:03:53.772: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=statefulset-8035 exec ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Mar 22 01:03:54.060: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" Mar 22 01:03:54.060: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Mar 22 01:03:54.060: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Mar 22 01:03:54.060: INFO: Waiting for statefulset status.replicas updated to 0 Mar 22 01:03:54.063: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3 Mar 22 01:04:04.074: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Mar 22 01:04:04.074: INFO: Waiting for pod ss-1 to enter Running - 
Ready=false, currently Running - Ready=false Mar 22 01:04:04.074: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Mar 22 01:04:04.114: INFO: POD NODE PHASE GRACE CONDITIONS Mar 22 01:04:04.114: INFO: ss-0 latest-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-03-22 01:03:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-03-22 01:03:54 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-03-22 01:03:54 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-03-22 01:03:08 +0000 UTC }] Mar 22 01:04:04.114: INFO: ss-1 latest-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-03-22 01:03:32 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-03-22 01:03:54 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-03-22 01:03:54 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-03-22 01:03:32 +0000 UTC }] Mar 22 01:04:04.114: INFO: ss-2 latest-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-03-22 01:03:32 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-03-22 01:03:54 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-03-22 01:03:54 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-03-22 01:03:32 +0000 UTC }] Mar 22 01:04:04.114: INFO: Mar 22 01:04:04.114: INFO: StatefulSet ss has not reached scale 0, at 3 [the same three-pod condition dump, now with GRACE 30s, is re-logged once per second from 01:04:05 through 01:04:13, each pass ending with: StatefulSet ss has not reached scale 0, at 3] STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespace statefulset-8035 Mar 22 01:04:14.356: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=statefulset-8035 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 22 01:04:14.499: INFO: rc: 1 Mar 22 01:04:14.499: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=statefulset-8035 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: error: unable to upgrade connection: container not found ("webserver") error: exit status 1 [the identical RunHostCmd retry is logged every 10s: it fails again with container not found ("webserver") at 01:04:24, 01:04:34 and 01:04:44, then with Error from server (NotFound): pods "ss-0" not found from 01:04:54 through 01:06:56] Mar 22 01:07:06.268: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config
--namespace=statefulset-8035 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 22 01:07:06.373: INFO: rc: 1 Mar 22 01:07:06.374: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=statefulset-8035 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Mar 22 01:07:16.375: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=statefulset-8035 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 22 01:07:16.486: INFO: rc: 1 Mar 22 01:07:16.486: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=statefulset-8035 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Mar 22 01:07:26.486: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=statefulset-8035 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 22 01:07:26.588: INFO: rc: 1 Mar 22 01:07:26.588: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=statefulset-8035 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Mar 22 01:07:36.589: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=statefulset-8035 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 22 01:07:36.688: INFO: rc: 1 Mar 22 01:07:36.688: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=statefulset-8035 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Mar 22 01:07:46.689: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=statefulset-8035 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 22 01:07:46.781: INFO: rc: 1 Mar 22 01:07:46.781: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=statefulset-8035 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Mar 22 01:07:56.782: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=statefulset-8035 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 22 01:07:56.877: INFO: rc: 1 Mar 22 01:07:56.877: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl 
--server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=statefulset-8035 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Mar 22 01:08:06.877: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=statefulset-8035 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 22 01:08:06.997: INFO: rc: 1 Mar 22 01:08:06.997: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=statefulset-8035 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Mar 22 01:08:16.998: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=statefulset-8035 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 22 01:08:17.113: INFO: rc: 1 Mar 22 01:08:17.113: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=statefulset-8035 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Mar 22 01:08:27.114: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=statefulset-8035 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 22 01:08:27.215: INFO: rc: 1 Mar 22 01:08:27.215: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=statefulset-8035 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Mar 22 01:08:37.216: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=statefulset-8035 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 22 01:08:37.322: INFO: rc: 1 Mar 22 01:08:37.323: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=statefulset-8035 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Mar 22 01:08:47.323: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=statefulset-8035 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 22 01:08:47.433: INFO: rc: 1 Mar 22 01:08:47.433: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=statefulset-8035 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit 
status 1 Mar 22 01:08:57.433: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=statefulset-8035 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 22 01:08:57.543: INFO: rc: 1 Mar 22 01:08:57.543: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=statefulset-8035 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Mar 22 01:09:07.543: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=statefulset-8035 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 22 01:09:07.636: INFO: rc: 1 Mar 22 01:09:07.636: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=statefulset-8035 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Mar 22 01:09:17.637: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=statefulset-8035 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 22 01:09:17.739: INFO: rc: 1 Mar 22 01:09:17.739: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: Mar 22 01:09:17.739: INFO: Scaling statefulset ss to 0 Mar 22 01:09:17.748: INFO: Waiting for statefulset status.replicas updated to 0 [AfterEach] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:116 Mar 22 01:09:17.750: INFO: Deleting all statefulset in ns statefulset-8035 Mar 22 01:09:17.752: INFO: Scaling statefulset ss to 0 Mar 22 01:09:17.760: INFO: Waiting for statefulset status.replicas updated to 0 Mar 22 01:09:17.762: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 01:09:17.781: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-8035" for this suite. 
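The retry loop above follows a simple pattern: run the command, and on a non-zero rc sleep 10s and try again until a deadline passes. A minimal Go sketch of that pattern, assuming a 5-minute deadline and a plain os/exec invocation of kubectl; this is illustrative, not the e2e framework's actual RunHostCmd implementation:

// Illustrative sketch of the 10s retry cadence seen in the log above.
// Assumptions: kubectl on PATH, a 5-minute overall deadline.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func runWithRetry(deadline time.Duration, name string, args ...string) error {
	stop := time.Now().Add(deadline)
	for {
		out, err := exec.Command(name, args...).CombinedOutput()
		if err == nil {
			return nil // rc: 0, command succeeded
		}
		if time.Now().After(stop) {
			return fmt.Errorf("giving up after %v: %v, output: %s", deadline, err, out)
		}
		fmt.Printf("Waiting 10s to retry failed command: %v\n", err)
		time.Sleep(10 * time.Second)
	}
}

func main() {
	err := runWithRetry(5*time.Minute, "kubectl",
		"--namespace=statefulset-8035", "exec", "ss-0", "--",
		"/bin/sh", "-x", "-c", "mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true")
	if err != nil {
		fmt.Println(err)
	}
}

Note the `|| true` in the exec'd shell command: the mv is allowed to fail inside the container, so the only rc: 1 the harness sees here comes from kubectl itself being unable to reach the pod.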
• [SLOW TEST:369.580 seconds] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:95 Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":330,"completed":174,"skipped":2988,"failed":11,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 01:09:17.789: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating secret with name secret-test-96863aa5-437d-4e38-a409-4e44c10581d9 STEP: Creating a pod to test consume secrets Mar 22 01:09:18.026: INFO: Waiting up to 5m0s for pod "pod-secrets-48271673-e489-4f76-90a5-5d3ffdf1df13" in namespace "secrets-4" to be "Succeeded or Failed" Mar 22 01:09:18.078: INFO: Pod "pod-secrets-48271673-e489-4f76-90a5-5d3ffdf1df13": Phase="Pending", Reason="", readiness=false. Elapsed: 51.301745ms Mar 22 01:09:20.083: INFO: Pod "pod-secrets-48271673-e489-4f76-90a5-5d3ffdf1df13": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.056626661s Mar 22 01:09:22.088: INFO: Pod "pod-secrets-48271673-e489-4f76-90a5-5d3ffdf1df13": Phase="Running", Reason="", readiness=true. Elapsed: 4.061765337s Mar 22 01:09:24.094: INFO: Pod "pod-secrets-48271673-e489-4f76-90a5-5d3ffdf1df13": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.06751357s STEP: Saw pod success Mar 22 01:09:24.094: INFO: Pod "pod-secrets-48271673-e489-4f76-90a5-5d3ffdf1df13" satisfied condition "Succeeded or Failed" Mar 22 01:09:24.097: INFO: Trying to get logs from node latest-worker2 pod pod-secrets-48271673-e489-4f76-90a5-5d3ffdf1df13 container secret-volume-test: STEP: delete the pod Mar 22 01:09:24.139: INFO: Waiting for pod pod-secrets-48271673-e489-4f76-90a5-5d3ffdf1df13 to disappear Mar 22 01:09:24.143: INFO: Pod pod-secrets-48271673-e489-4f76-90a5-5d3ffdf1df13 no longer exists [AfterEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 01:09:24.143: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-4" for this suite. STEP: Destroying namespace "secret-namespace-6836" for this suite. • [SLOW TEST:6.380 seconds] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":330,"completed":175,"skipped":3002,"failed":11,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]"]} SSSSSSS ------------------------------ [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 01:09:24.170: INFO: >>> 
kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 [BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:105 STEP: Creating service test in namespace statefulset-7720 [It] Should recreate evicted statefulset [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Looking for a node to schedule stateful set and pod STEP: Creating pod with conflicting port in namespace statefulset-7720 STEP: Creating statefulset with conflicting port in namespace statefulset-7720 STEP: Waiting until pod test-pod starts running in namespace statefulset-7720 STEP: Waiting until stateful pod ss-0 has been recreated and deleted at least once in namespace statefulset-7720 Mar 22 01:09:28.469: INFO: Observed stateful pod in namespace: statefulset-7720, name: ss-0, uid: 009d0b95-10f9-42cd-982d-d8c01103cf82, status phase: Pending. Waiting for statefulset controller to delete. Mar 22 01:09:28.769: INFO: Observed stateful pod in namespace: statefulset-7720, name: ss-0, uid: 009d0b95-10f9-42cd-982d-d8c01103cf82, status phase: Failed. Waiting for statefulset controller to delete. Mar 22 01:09:28.777: INFO: Observed stateful pod in namespace: statefulset-7720, name: ss-0, uid: 009d0b95-10f9-42cd-982d-d8c01103cf82, status phase: Failed. Waiting for statefulset controller to delete. Mar 22 01:09:28.834: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-7720 STEP: Removing pod with conflicting port in namespace statefulset-7720 STEP: Waiting until stateful pod ss-0 is recreated in namespace statefulset-7720 and is in running state [AfterEach] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:116 Mar 22 01:09:32.931: INFO: Deleting all statefulset in ns statefulset-7720 Mar 22 01:09:32.934: INFO: Scaling statefulset ss to 0 Mar 22 01:10:32.954: INFO: Waiting for statefulset status.replicas updated to 0 Mar 22 01:10:32.957: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 01:10:32.979: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-7720" for this suite.
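The recreate-evicted flow above watches the same pod name ss-0 cycle through Pending, then Failed (the hostPort conflict), then deletion by the StatefulSet controller, then recreation once the conflicting pod is removed. A minimal client-go sketch of observing that cycle by polling the pod's uid and phase; assumptions: a kubeconfig at /root/.kube/config as in the log, and a fixed 30-iteration poll instead of the watch the real test uses:

// Sketch: poll a stateful pod's uid and phase to see it deleted and
// recreated. A changed uid after a NotFound gap confirms recreation.
package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumption: kubeconfig path taken from the log above.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	for i := 0; i < 30; i++ {
		pod, err := cs.CoreV1().Pods("statefulset-7720").Get(context.TODO(), "ss-0", metav1.GetOptions{})
		if err != nil {
			fmt.Println("ss-0 not found (controller deleting/recreating):", err)
		} else {
			fmt.Printf("ss-0 uid=%s phase=%s\n", pod.UID, pod.Status.Phase)
		}
		time.Sleep(2 * time.Second)
	}
}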
• [SLOW TEST:68.818 seconds] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:95 Should recreate evicted statefulset [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":330,"completed":176,"skipped":3009,"failed":11,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]"]} SSSSS ------------------------------ [sig-network] Services should be able to create a functioning NodePort service [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 01:10:32.988: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should be able to create a functioning NodePort service [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating service nodeport-test with type=NodePort in namespace services-5150 STEP: creating replication controller nodeport-test in namespace services-5150 I0322 01:10:33.231149 7 runners.go:190] Created replication controller with name: nodeport-test, namespace: services-5150, replica count: 2 I0322 01:10:36.283036 7 runners.go:190] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0322 01:10:39.283311 7 runners.go:190] nodeport-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Mar 22 01:10:39.283: INFO: Creating new exec pod E0322 01:10:43.306993 7 
reflector.go:138] k8s.io/kubernetes/test/e2e/framework/service/jig.go:437: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: the server could not find the requested resource E0322 01:10:44.703396 7 reflector.go:138] k8s.io/kubernetes/test/e2e/framework/service/jig.go:437: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: the server could not find the requested resource E0322 01:10:47.746137 7 reflector.go:138] k8s.io/kubernetes/test/e2e/framework/service/jig.go:437: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: the server could not find the requested resource E0322 01:10:51.353408 7 reflector.go:138] k8s.io/kubernetes/test/e2e/framework/service/jig.go:437: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: the server could not find the requested resource E0322 01:11:01.402628 7 reflector.go:138] k8s.io/kubernetes/test/e2e/framework/service/jig.go:437: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: the server could not find the requested resource E0322 01:11:21.960304 7 reflector.go:138] k8s.io/kubernetes/test/e2e/framework/service/jig.go:437: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: the server could not find the requested resource E0322 01:11:57.444681 7 reflector.go:138] k8s.io/kubernetes/test/e2e/framework/service/jig.go:437: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: the server could not find the requested resource Mar 22 01:12:43.306: FAIL: Unexpected error: <*errors.errorString | 0xc004488460>: { s: "no subset of available IP address found for the endpoint nodeport-test within timeout 2m0s", } no subset of available IP address found for the endpoint nodeport-test within timeout 2m0s occurred Full Stack Trace k8s.io/kubernetes/test/e2e/network.glob..func24.11() /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1169 +0x265 k8s.io/kubernetes/test/e2e.RunE2ETests(0xc002c6a180) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c k8s.io/kubernetes/test/e2e.TestE2E(0xc002c6a180) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b testing.tRunner(0xc002c6a180, 0x6d60740) /usr/local/go/src/testing/testing.go:1194 +0xef created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1239 +0x2b3 [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 STEP: Collecting events from namespace "services-5150". STEP: Found 14 events. 
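The FAIL above means the test never saw a ready address in the service's Endpoints within the 2m0s timeout, and the repeated reflector errors indicate that listing *v1.EndpointSlice was itself failing against this apiserver, so the core Endpoints object is the natural thing to inspect by hand. A minimal client-go sketch that dumps the ready and not-ready addresses for nodeport-test (assumption: same kubeconfig path as the log); the event and node dumps that follow are the framework's own diagnostics for this same failure:

// Sketch: print the Endpoints subsets that the test timed out waiting on.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumption: kubeconfig path taken from the log above.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ep, err := cs.CoreV1().Endpoints("services-5150").Get(context.TODO(), "nodeport-test", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// An empty Subsets list here matches the "no subset of available IP
	// address found" error reported by the test.
	for _, ss := range ep.Subsets {
		for _, addr := range ss.Addresses {
			fmt.Printf("ready address: %s\n", addr.IP)
		}
		for _, addr := range ss.NotReadyAddresses {
			fmt.Printf("not-ready address: %s\n", addr.IP)
		}
	}
}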
Mar 22 01:12:43.313: INFO: At 2021-03-22 01:10:33 +0000 UTC - event for nodeport-test: {replication-controller } SuccessfulCreate: Created pod: nodeport-test-ztjlq Mar 22 01:12:43.313: INFO: At 2021-03-22 01:10:33 +0000 UTC - event for nodeport-test: {replication-controller } SuccessfulCreate: Created pod: nodeport-test-hgq8c Mar 22 01:12:43.313: INFO: At 2021-03-22 01:10:33 +0000 UTC - event for nodeport-test-hgq8c: {default-scheduler } Scheduled: Successfully assigned services-5150/nodeport-test-hgq8c to latest-worker Mar 22 01:12:43.313: INFO: At 2021-03-22 01:10:33 +0000 UTC - event for nodeport-test-ztjlq: {default-scheduler } Scheduled: Successfully assigned services-5150/nodeport-test-ztjlq to latest-worker2 Mar 22 01:12:43.314: INFO: At 2021-03-22 01:10:34 +0000 UTC - event for nodeport-test-ztjlq: {kubelet latest-worker2} Pulled: Container image "k8s.gcr.io/e2e-test-images/agnhost:2.28" already present on machine Mar 22 01:12:43.314: INFO: At 2021-03-22 01:10:35 +0000 UTC - event for nodeport-test-hgq8c: {kubelet latest-worker} Pulled: Container image "k8s.gcr.io/e2e-test-images/agnhost:2.28" already present on machine Mar 22 01:12:43.314: INFO: At 2021-03-22 01:10:36 +0000 UTC - event for nodeport-test-hgq8c: {kubelet latest-worker} Created: Created container nodeport-test Mar 22 01:12:43.314: INFO: At 2021-03-22 01:10:36 +0000 UTC - event for nodeport-test-ztjlq: {kubelet latest-worker2} Created: Created container nodeport-test Mar 22 01:12:43.314: INFO: At 2021-03-22 01:10:36 +0000 UTC - event for nodeport-test-ztjlq: {kubelet latest-worker2} Started: Started container nodeport-test Mar 22 01:12:43.314: INFO: At 2021-03-22 01:10:37 +0000 UTC - event for nodeport-test-hgq8c: {kubelet latest-worker} Started: Started container nodeport-test Mar 22 01:12:43.314: INFO: At 2021-03-22 01:10:39 +0000 UTC - event for execpodd4vth: {default-scheduler } Scheduled: Successfully assigned services-5150/execpodd4vth to latest-worker Mar 22 01:12:43.314: INFO: At 2021-03-22 01:10:41 +0000 UTC - event for execpodd4vth: {kubelet latest-worker} Pulled: Container image "k8s.gcr.io/e2e-test-images/agnhost:2.28" already present on machine Mar 22 01:12:43.314: INFO: At 2021-03-22 01:10:42 +0000 UTC - event for execpodd4vth: {kubelet latest-worker} Started: Started container agnhost-container Mar 22 01:12:43.314: INFO: At 2021-03-22 01:10:42 +0000 UTC - event for execpodd4vth: {kubelet latest-worker} Created: Created container agnhost-container Mar 22 01:12:43.317: INFO: POD NODE PHASE GRACE CONDITIONS Mar 22 01:12:43.317: INFO: execpodd4vth latest-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-03-22 01:10:39 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-03-22 01:10:42 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-03-22 01:10:42 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-03-22 01:10:39 +0000 UTC }] Mar 22 01:12:43.317: INFO: nodeport-test-hgq8c latest-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-03-22 01:10:33 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-03-22 01:10:37 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-03-22 01:10:37 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-03-22 01:10:33 +0000 UTC }] Mar 22 01:12:43.317: INFO: nodeport-test-ztjlq latest-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-03-22 01:10:33 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-03-22 01:10:37 +0000 UTC } {ContainersReady 
True 0001-01-01 00:00:00 +0000 UTC 2021-03-22 01:10:37 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-03-22 01:10:33 +0000 UTC }] Mar 22 01:12:43.317: INFO: Mar 22 01:12:43.323: INFO: Logging node info for node latest-control-plane Mar 22 01:12:43.326: INFO: Node Info: &Node{ObjectMeta:{latest-control-plane 490b9532-4cb6-4803-8805-500c50bef538 7008110 0 2021-02-19 10:11:38 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-control-plane kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master:] map[kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2021-02-19 10:11:38 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {kubeadm Update v1 2021-02-19 10:11:39 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{}}}}} {kube-controller-manager Update v1 2021-02-19 10:12:16 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:kind://docker/latest/latest-control-plane,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} 
{} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-03-22 01:09:44 +0000 UTC,LastTransitionTime:2021-02-19 10:11:38 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-03-22 01:09:44 +0000 UTC,LastTransitionTime:2021-02-19 10:11:38 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-03-22 01:09:44 +0000 UTC,LastTransitionTime:2021-02-19 10:11:38 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-03-22 01:09:44 +0000 UTC,LastTransitionTime:2021-02-19 10:12:15 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.14,},NodeAddress{Type:Hostname,Address:latest-control-plane,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:2277e49732264d9b915753a27b5b08cc,SystemUUID:3fcd47a6-9190-448f-a26a-9823c0424f23,BootID:b267d78b-f69b-4338-80e8-3f4944338e5d,KernelVersion:4.15.0-118-generic,OSImage:Ubuntu 20.04 LTS,ContainerRuntimeVersion:containerd://1.3.3-14-g449e9269,KubeletVersion:v1.21.0-alpha.0,KubeProxyVersion:v1.21.0-alpha.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/etcd:3.4.13-0],SizeBytes:254659261,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.21.0-alpha.0],SizeBytes:136866158,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.21.0-alpha.0],SizeBytes:95511852,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.21.0-alpha.0],SizeBytes:88147262,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.21.0-alpha.0],SizeBytes:66088748,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:a3f7549ea04c419276e8b84e90a515bbce5bc8a057be2ed974ec45492eca346e k8s.gcr.io/e2e-test-images/agnhost:2.26],SizeBytes:49216572,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:4f3eec1e602e1e0b0e5858c8f3d399d0ef536799ec1f7666e025f211001cb706 k8s.gcr.io/e2e-test-images/agnhost:2.28],SizeBytes:49210832,},ContainerImage{Names:[k8s.gcr.io/coredns:1.7.0],SizeBytes:45355487,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 docker.io/aquasec/kube-bench:0.3.1],SizeBytes:8042926,},ContainerImage{Names:[k8s.gcr.io/pause:3.2],SizeBytes:685724,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:301268,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 22 01:12:43.326: INFO: Logging kubelet events for node latest-control-plane Mar 22 01:12:43.329: INFO: Logging pods the kubelet thinks is on node latest-control-plane Mar 22 01:12:43.362: INFO: 
kube-controller-manager-latest-control-plane started at 2021-02-19 10:11:46 +0000 UTC (0+1 container statuses recorded) Mar 22 01:12:43.362: INFO: Container kube-controller-manager ready: true, restart count 0 Mar 22 01:12:43.362: INFO: kube-scheduler-latest-control-plane started at 2021-02-19 10:11:46 +0000 UTC (0+1 container statuses recorded) Mar 22 01:12:43.362: INFO: Container kube-scheduler ready: true, restart count 0 Mar 22 01:12:43.362: INFO: kube-apiserver-latest-control-plane started at 2021-02-19 10:11:46 +0000 UTC (0+1 container statuses recorded) Mar 22 01:12:43.362: INFO: Container kube-apiserver ready: true, restart count 0 Mar 22 01:12:43.362: INFO: kindnet-94zqp started at 2021-02-19 10:11:54 +0000 UTC (0+1 container statuses recorded) Mar 22 01:12:43.362: INFO: Container kindnet-cni ready: true, restart count 0 Mar 22 01:12:43.362: INFO: coredns-74ff55c5b-9rxsk started at 2021-03-22 00:01:47 +0000 UTC (0+1 container statuses recorded) Mar 22 01:12:43.362: INFO: Container coredns ready: true, restart count 0 Mar 22 01:12:43.362: INFO: local-path-provisioner-8b46957d4-54gls started at 2021-02-19 10:12:21 +0000 UTC (0+1 container statuses recorded) Mar 22 01:12:43.362: INFO: Container local-path-provisioner ready: true, restart count 0 Mar 22 01:12:43.362: INFO: etcd-latest-control-plane started at 2021-02-19 10:11:46 +0000 UTC (0+1 container statuses recorded) Mar 22 01:12:43.362: INFO: Container etcd ready: true, restart count 0 Mar 22 01:12:43.362: INFO: kube-proxy-6jdsd started at 2021-02-19 10:11:54 +0000 UTC (0+1 container statuses recorded) Mar 22 01:12:43.362: INFO: Container kube-proxy ready: true, restart count 0 Mar 22 01:12:43.362: INFO: coredns-74ff55c5b-tqd5x started at 2021-03-22 00:01:48 +0000 UTC (0+1 container statuses recorded) Mar 22 01:12:43.362: INFO: Container coredns ready: true, restart count 0 W0322 01:12:43.368184 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. 
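The Node Info dumps above and below are full Node objects; the NodeCondition entries (MemoryPressure, DiskPressure, PIDPressure, Ready) are usually the part worth reading first when triaging a failure like this one. A minimal client-go sketch that prints just those conditions for every node (assumption: same kubeconfig path as the log):

// Sketch: extract only the NodeCondition entries from each Node object.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumption: kubeconfig path taken from the log above.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		for _, c := range n.Status.Conditions {
			fmt.Printf("%s %s=%s (%s)\n", n.Name, c.Type, c.Status, c.Reason)
		}
	}
}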
Mar 22 01:12:43.466: INFO: Latency metrics for node latest-control-plane Mar 22 01:12:43.466: INFO: Logging node info for node latest-worker Mar 22 01:12:43.470: INFO: Node Info: &Node{ObjectMeta:{latest-worker 52cd6d4b-d53f-435d-801a-04c2822dec44 7008149 0 2021-02-19 10:12:05 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux io.kubernetes.storage.mock/node:some-mock-node kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-worker kubernetes.io/os:linux] map[csi.volume.kubernetes.io/nodeid:{"csi-mock-csi-mock-volumes-11":"csi-mock-csi-mock-volumes-11","csi-mock-csi-mock-volumes-1170":"csi-mock-csi-mock-volumes-1170","csi-mock-csi-mock-volumes-1204":"csi-mock-csi-mock-volumes-1204","csi-mock-csi-mock-volumes-1243":"csi-mock-csi-mock-volumes-1243","csi-mock-csi-mock-volumes-135":"csi-mock-csi-mock-volumes-135","csi-mock-csi-mock-volumes-1517":"csi-mock-csi-mock-volumes-1517","csi-mock-csi-mock-volumes-1561":"csi-mock-csi-mock-volumes-1561","csi-mock-csi-mock-volumes-1595":"csi-mock-csi-mock-volumes-1595","csi-mock-csi-mock-volumes-1714":"csi-mock-csi-mock-volumes-1714","csi-mock-csi-mock-volumes-1732":"csi-mock-csi-mock-volumes-1732","csi-mock-csi-mock-volumes-1876":"csi-mock-csi-mock-volumes-1876","csi-mock-csi-mock-volumes-1945":"csi-mock-csi-mock-volumes-1945","csi-mock-csi-mock-volumes-2041":"csi-mock-csi-mock-volumes-2041","csi-mock-csi-mock-volumes-2057":"csi-mock-csi-mock-volumes-2057","csi-mock-csi-mock-volumes-2208":"csi-mock-csi-mock-volumes-2208","csi-mock-csi-mock-volumes-2302":"csi-mock-csi-mock-volumes-2302","csi-mock-csi-mock-volumes-2459":"csi-mock-csi-mock-volumes-2459","csi-mock-csi-mock-volumes-2511":"csi-mock-csi-mock-volumes-2511","csi-mock-csi-mock-volumes-2706":"csi-mock-csi-mock-volumes-2706","csi-mock-csi-mock-volumes-2710":"csi-mock-csi-mock-volumes-2710","csi-mock-csi-mock-volumes-273":"csi-mock-csi-mock-volumes-273","csi-mock-csi-mock-volumes-2782":"csi-mock-csi-mock-volumes-2782","csi-mock-csi-mock-volumes-2797":"csi-mock-csi-mock-volumes-2797","csi-mock-csi-mock-volumes-2805":"csi-mock-csi-mock-volumes-2805","csi-mock-csi-mock-volumes-2843":"csi-mock-csi-mock-volumes-2843","csi-mock-csi-mock-volumes-3075":"csi-mock-csi-mock-volumes-3075","csi-mock-csi-mock-volumes-3107":"csi-mock-csi-mock-volumes-3107","csi-mock-csi-mock-volumes-3115":"csi-mock-csi-mock-volumes-3115","csi-mock-csi-mock-volumes-3164":"csi-mock-csi-mock-volumes-3164","csi-mock-csi-mock-volumes-3202":"csi-mock-csi-mock-volumes-3202","csi-mock-csi-mock-volumes-3235":"csi-mock-csi-mock-volumes-3235","csi-mock-csi-mock-volumes-3298":"csi-mock-csi-mock-volumes-3298","csi-mock-csi-mock-volumes-3313":"csi-mock-csi-mock-volumes-3313","csi-mock-csi-mock-volumes-3364":"csi-mock-csi-mock-volumes-3364","csi-mock-csi-mock-volumes-3488":"csi-mock-csi-mock-volumes-3488","csi-mock-csi-mock-volumes-3490":"csi-mock-csi-mock-volumes-3490","csi-mock-csi-mock-volumes-3546":"csi-mock-csi-mock-volumes-3546","csi-mock-csi-mock-volumes-3568":"csi-mock-csi-mock-volumes-3568","csi-mock-csi-mock-volumes-3595":"csi-mock-csi-mock-volumes-3595","csi-mock-csi-mock-volumes-3615":"csi-mock-csi-mock-volumes-3615","csi-mock-csi-mock-volumes-3622":"csi-mock-csi-mock-volumes-3622","csi-mock-csi-mock-volumes-3660":"csi-mock-csi-mock-volumes-3660","csi-mock-csi-mock-volumes-3738":"csi-mock-csi-mock-volumes-3738","csi-mock-csi-mock-volumes-380":"csi-mock-csi-mock-volumes-380","csi-mock-csi-mock-volumes-3905":"csi-mock-csi-mock-volumes-3905","csi-mock-csi-mock-volumes-3983":"csi-mock-csi-mock-volumes-3983","csi-mock
-csi-mock-volumes-4658":"csi-mock-csi-mock-volumes-4658","csi-mock-csi-mock-volumes-4689":"csi-mock-csi-mock-volumes-4689","csi-mock-csi-mock-volumes-4839":"csi-mock-csi-mock-volumes-4839","csi-mock-csi-mock-volumes-4871":"csi-mock-csi-mock-volumes-4871","csi-mock-csi-mock-volumes-4885":"csi-mock-csi-mock-volumes-4885","csi-mock-csi-mock-volumes-4888":"csi-mock-csi-mock-volumes-4888","csi-mock-csi-mock-volumes-5028":"csi-mock-csi-mock-volumes-5028","csi-mock-csi-mock-volumes-5118":"csi-mock-csi-mock-volumes-5118","csi-mock-csi-mock-volumes-5120":"csi-mock-csi-mock-volumes-5120","csi-mock-csi-mock-volumes-5160":"csi-mock-csi-mock-volumes-5160","csi-mock-csi-mock-volumes-5164":"csi-mock-csi-mock-volumes-5164","csi-mock-csi-mock-volumes-5225":"csi-mock-csi-mock-volumes-5225","csi-mock-csi-mock-volumes-526":"csi-mock-csi-mock-volumes-526","csi-mock-csi-mock-volumes-5365":"csi-mock-csi-mock-volumes-5365","csi-mock-csi-mock-volumes-5399":"csi-mock-csi-mock-volumes-5399","csi-mock-csi-mock-volumes-5443":"csi-mock-csi-mock-volumes-5443","csi-mock-csi-mock-volumes-5520":"csi-mock-csi-mock-volumes-5520","csi-mock-csi-mock-volumes-5561":"csi-mock-csi-mock-volumes-5561","csi-mock-csi-mock-volumes-5608":"csi-mock-csi-mock-volumes-5608","csi-mock-csi-mock-volumes-5652":"csi-mock-csi-mock-volumes-5652","csi-mock-csi-mock-volumes-5672":"csi-mock-csi-mock-volumes-5672","csi-mock-csi-mock-volumes-569":"csi-mock-csi-mock-volumes-569","csi-mock-csi-mock-volumes-5759":"csi-mock-csi-mock-volumes-5759","csi-mock-csi-mock-volumes-5910":"csi-mock-csi-mock-volumes-5910","csi-mock-csi-mock-volumes-6046":"csi-mock-csi-mock-volumes-6046","csi-mock-csi-mock-volumes-6099":"csi-mock-csi-mock-volumes-6099","csi-mock-csi-mock-volumes-621":"csi-mock-csi-mock-volumes-621","csi-mock-csi-mock-volumes-6347":"csi-mock-csi-mock-volumes-6347","csi-mock-csi-mock-volumes-6447":"csi-mock-csi-mock-volumes-6447","csi-mock-csi-mock-volumes-6752":"csi-mock-csi-mock-volumes-6752","csi-mock-csi-mock-volumes-6763":"csi-mock-csi-mock-volumes-6763","csi-mock-csi-mock-volumes-7184":"csi-mock-csi-mock-volumes-7184","csi-mock-csi-mock-volumes-7244":"csi-mock-csi-mock-volumes-7244","csi-mock-csi-mock-volumes-7259":"csi-mock-csi-mock-volumes-7259","csi-mock-csi-mock-volumes-726":"csi-mock-csi-mock-volumes-726","csi-mock-csi-mock-volumes-7302":"csi-mock-csi-mock-volumes-7302","csi-mock-csi-mock-volumes-7346":"csi-mock-csi-mock-volumes-7346","csi-mock-csi-mock-volumes-7378":"csi-mock-csi-mock-volumes-7378","csi-mock-csi-mock-volumes-7385":"csi-mock-csi-mock-volumes-7385","csi-mock-csi-mock-volumes-746":"csi-mock-csi-mock-volumes-746","csi-mock-csi-mock-volumes-7574":"csi-mock-csi-mock-volumes-7574","csi-mock-csi-mock-volumes-7712":"csi-mock-csi-mock-volumes-7712","csi-mock-csi-mock-volumes-7749":"csi-mock-csi-mock-volumes-7749","csi-mock-csi-mock-volumes-7820":"csi-mock-csi-mock-volumes-7820","csi-mock-csi-mock-volumes-7946":"csi-mock-csi-mock-volumes-7946","csi-mock-csi-mock-volumes-8159":"csi-mock-csi-mock-volumes-8159","csi-mock-csi-mock-volumes-8382":"csi-mock-csi-mock-volumes-8382","csi-mock-csi-mock-volumes-8458":"csi-mock-csi-mock-volumes-8458","csi-mock-csi-mock-volumes-8569":"csi-mock-csi-mock-volumes-8569","csi-mock-csi-mock-volumes-8626":"csi-mock-csi-mock-volumes-8626","csi-mock-csi-mock-volumes-8627":"csi-mock-csi-mock-volumes-8627","csi-mock-csi-mock-volumes-8774":"csi-mock-csi-mock-volumes-8774","csi-mock-csi-mock-volumes-8777":"csi-mock-csi-mock-volumes-8777","csi-mock-csi-mock-volumes-8880":"csi-mock-csi-mock-volumes-8880","csi-mock-
csi-mock-volumes-923":"csi-mock-csi-mock-volumes-923","csi-mock-csi-mock-volumes-9279":"csi-mock-csi-mock-volumes-9279","csi-mock-csi-mock-volumes-9372":"csi-mock-csi-mock-volumes-9372","csi-mock-csi-mock-volumes-9687":"csi-mock-csi-mock-volumes-9687","csi-mock-csi-mock-volumes-969":"csi-mock-csi-mock-volumes-969","csi-mock-csi-mock-volumes-983":"csi-mock-csi-mock-volumes-983","csi-mock-csi-mock-volumes-9858":"csi-mock-csi-mock-volumes-9858","csi-mock-csi-mock-volumes-9983":"csi-mock-csi-mock-volumes-9983","csi-mock-csi-mock-volumes-9992":"csi-mock-csi-mock-volumes-9992","csi-mock-csi-mock-volumes-9995":"csi-mock-csi-mock-volumes-9995"} kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-02-19 10:12:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {kube-controller-manager Update v1 2021-03-22 00:39:31 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}}}}} {kubelet Update v1 2021-03-22 00:39:59 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:csi.volume.kubernetes.io/nodeid":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:io.kubernetes.storage.mock/node":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:kind://docker/latest/latest-worker,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 
DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-03-22 01:10:04 +0000 UTC,LastTransitionTime:2021-02-19 10:12:05 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-03-22 01:10:04 +0000 UTC,LastTransitionTime:2021-02-19 10:12:05 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-03-22 01:10:04 +0000 UTC,LastTransitionTime:2021-02-19 10:12:05 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-03-22 01:10:04 +0000 UTC,LastTransitionTime:2021-02-19 10:12:25 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.9,},NodeAddress{Type:Hostname,Address:latest-worker,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:9501a3ffa7bf40adaca8f4cb3d1dcc93,SystemUUID:6921bb21-67c9-42ea-b514-81b1daac5968,BootID:b267d78b-f69b-4338-80e8-3f4944338e5d,KernelVersion:4.15.0-118-generic,OSImage:Ubuntu 20.04 LTS,ContainerRuntimeVersion:containerd://1.3.3-14-g449e9269,KubeletVersion:v1.21.0-alpha.0,KubeProxyVersion:v1.21.0-alpha.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/ollivier/clearwater-cassandra@sha256:7460a5e29c91e16bd36534b5284b27832e1c6e0f7c22a4bb79eda79942d250e1 docker.io/ollivier/clearwater-cassandra:hunter],SizeBytes:386500834,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead-prov@sha256:ef3803b4336ea217f9a467dac7f5c4bc3d2cd0f3cbbcc6c101419506fc3a6fa4 docker.io/ollivier/clearwater-homestead-prov:hunter],SizeBytes:360721934,},ContainerImage{Names:[docker.io/ollivier/clearwater-ellis@sha256:abcaf4a9ec74aa746f3b3949f05ee359349fce2b0b3b2deedf67164719cff5dc docker.io/ollivier/clearwater-ellis:hunter],SizeBytes:351519591,},ContainerImage{Names:[docker.io/ollivier/clearwater-homer@sha256:f8738d756137990120f6aafca7e0bc57186ab591a4b51508ba9ddf30688f4de1 docker.io/ollivier/clearwater-homer:hunter],SizeBytes:344304298,},ContainerImage{Names:[docker.io/ollivier/clearwater-astaire@sha256:5bc730435563dc3e31f3a3b7bb58b5899c453929b624052622b3a9a1c2fd04d8 docker.io/ollivier/clearwater-astaire:hunter],SizeBytes:327310970,},ContainerImage{Names:[docker.io/ollivier/clearwater-bono@sha256:0da41ddbf72c158f8e617015450587634cdf6856696005601b2108df92a27254 docker.io/ollivier/clearwater-bono:hunter],SizeBytes:303708624,},ContainerImage{Names:[docker.io/ollivier/clearwater-sprout@sha256:fd91f90bdfbfac30288ca296962e36537f8fa311d34711904e027098227d9f49 docker.io/ollivier/clearwater-sprout:hunter],SizeBytes:298627136,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead@sha256:8467d6c9caefa6a402eba927ee31f8179b2dc3386ac4768dd04f4bd630a1b9e9 docker.io/ollivier/clearwater-homestead:hunter],SizeBytes:295167572,},ContainerImage{Names:[docker.io/ollivier/clearwater-ralf@sha256:7054a741dbd4ea45fa4efd8138c07e21d35fbed627971559de5df2d03883e94f 
docker.io/ollivier/clearwater-ralf:hunter],SizeBytes:287441316,},ContainerImage{Names:[docker.io/ollivier/clearwater-chronos@sha256:1d5f9b5385db632e3076463fec5a4db5d4a15ef21ac5a112cb42fb0551ffa36d docker.io/ollivier/clearwater-chronos:hunter],SizeBytes:285504787,},ContainerImage{Names:[k8s.gcr.io/etcd:3.4.13-0],SizeBytes:254659261,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.21.0-alpha.0],SizeBytes:136866158,},ContainerImage{Names:[docker.io/aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 docker.io/aquasec/kube-hunter:0.3.1],SizeBytes:124684106,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:112029652,},ContainerImage{Names:[docker.io/gluster/glusterdynamic-provisioner@sha256:90067cb05a7d217651e84576935934fd9dcff8e6bcdcbaa416bbf36fcd09dbd1 docker.io/gluster/glusterdynamic-provisioner:v1.0],SizeBytes:111200078,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner@sha256:8bc20b52ce066dd4ea3d9eaac40c04ea8a77f47c33789676580cf4c7c9ea3c3d k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner:v1.0],SizeBytes:111199402,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.21.0-alpha.0],SizeBytes:95511852,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.21.0-alpha.0],SizeBytes:88147262,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:85425365,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.21.0-alpha.0],SizeBytes:66088748,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:49230179,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:a3f7549ea04c419276e8b84e90a515bbce5bc8a057be2ed974ec45492eca346e k8s.gcr.io/e2e-test-images/agnhost:2.26],SizeBytes:49216572,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:4f3eec1e602e1e0b0e5858c8f3d399d0ef536799ec1f7666e025f211001cb706 k8s.gcr.io/e2e-test-images/agnhost:2.28],SizeBytes:49210832,},ContainerImage{Names:[docker.io/bitnami/kubectl@sha256:471a2f06834db208d0e4bef5856550f911357c41b72d0baefa591b3714839067 docker.io/bitnami/kubectl:latest],SizeBytes:48898314,},ContainerImage{Names:[k8s.gcr.io/coredns:1.7.0],SizeBytes:45355487,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:41902332,},ContainerImage{Names:[docker.io/library/httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a docker.io/library/httpd:2.4.39-alpine],SizeBytes:41901429,},ContainerImage{Names:[docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 docker.io/library/httpd:2.4.38-alpine],SizeBytes:40765017,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 
k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:40765006,},ContainerImage{Names:[docker.io/ollivier/clearwater-live-test@sha256:1ec1c46f531b5466e62e2a3f1e65e97336e8ea0ca50f11a4305775d6aeac9a58 docker.io/ollivier/clearwater-live-test:hunter],SizeBytes:39060692,},ContainerImage{Names:[k8s.gcr.io/build-image/debian-iptables@sha256:abe8cef9e116f2d5ec1175c386e33841ff3386779138b425af876384b0fd7ccb k8s.gcr.io/build-image/debian-iptables:buster-v1.5.0],SizeBytes:38088315,},ContainerImage{Names:[k8s.gcr.io/build-image/debian-iptables@sha256:4c9410a4ee555dcb0e8b7bd6fc77c65ac400f7c5bd4555df8187630efaea6ea4 k8s.gcr.io/build-image/debian-iptables:buster-v1.3.0],SizeBytes:37934917,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:ff02aacd9766d597883fabafc7ad604c719a57611db1bcc1564c69a45b000a55 gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.17],SizeBytes:25311280,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:24757245,},ContainerImage{Names:[docker.io/pingcap/chaos-daemon@sha256:2138de599ca94104101fcd8f3bd04401f8c88f2bc721d0f77d55a81b85f4d92f docker.io/pingcap/chaos-daemon:v0.8.0],SizeBytes:21885344,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:20c828075d1e36f679d6a91e905b0927141eef5e15be0c9a1ca4a6a0ed9313d2 k8s.gcr.io/sig-storage/csi-provisioner:v2.1.0],SizeBytes:21205045,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:78e3393f5fd5ff6c1e5dada2478cfa456fb7164929e573cf9a87bf6532730679 k8s.gcr.io/sig-storage/csi-provisioner:v1.6.0],SizeBytes:19408504,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:2ffa647e8107cfd39e5f464e738dce014c9f5e51b108da36c3ab621048d0bbab k8s.gcr.io/sig-storage/csi-attacher:v2.2.0],SizeBytes:18451536,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:6c6a0332693a7c456378f6abd2bb40611826c1e1a733cadbdae2daab3125b71c k8s.gcr.io/sig-storage/csi-resizer:v0.5.0],SizeBytes:18412631,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:4051e85640c22f8e00c34dbd273576fc9e1e2829992656588062be9c0f69b04b k8s.gcr.io/e2e-test-images/nonroot:1.1],SizeBytes:17748448,},ContainerImage{Names:[docker.io/pingcap/chaos-mesh@sha256:b90c04665e0275602e52df37d4b6fdd4621df7d4e4c823fc63356559f67dea72 docker.io/pingcap/chaos-mesh:v0.8.0],SizeBytes:13863887,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e docker.io/coredns/coredns:latest docker.io/coredns/coredns:1.8.0],SizeBytes:12945155,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:4a6e0769130686518325b21b0c1d0688b54e7c79244d48e1b15634e98e40c6ef docker.io/coredns/coredns:1.7.1],SizeBytes:12935701,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 
docker.io/aquasec/kube-bench:0.3.1],SizeBytes:8042926,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:9622c6a6dac7499a055a382930f4de82905a3c5735c0753f7094115c9c871309 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v1.3.0],SizeBytes:7717137,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Mar 22 01:12:43.470: INFO: Logging kubelet events for node latest-worker
Mar 22 01:12:43.472: INFO: Logging pods the kubelet thinks are on node latest-worker
Mar 22 01:12:43.490: INFO: execpodd4vth started at 2021-03-22 01:10:39 +0000 UTC (0+1 container statuses recorded)
Mar 22 01:12:43.490: INFO: Container agnhost-container ready: true, restart count 0
Mar 22 01:12:43.490: INFO: kube-proxy-5wvjm started at 2021-02-19 10:12:05 +0000 UTC (0+1 container statuses recorded)
Mar 22 01:12:43.490: INFO: Container kube-proxy ready: true, restart count 0
Mar 22 01:12:43.490: INFO: nodeport-test-hgq8c started at 2021-03-22 01:10:33 +0000 UTC (0+1 container statuses recorded)
Mar 22 01:12:43.490: INFO: Container nodeport-test ready: true, restart count 0
Mar 22 01:12:43.490: INFO: kindnet-l4mzm started at 2021-03-22 00:02:51 +0000 UTC (0+1 container statuses recorded)
Mar 22 01:12:43.490: INFO: Container kindnet-cni ready: true, restart count 0
Mar 22 01:12:43.490: INFO: chaos-daemon-vb9xf started at 2021-03-22 00:02:51 +0000 UTC (0+1 container statuses recorded)
Mar 22 01:12:43.490: INFO: Container chaos-daemon ready: true, restart count 0
Mar 22 01:12:43.490: INFO: chaos-controller-manager-69c479c674-rdmrr started at 2021-03-22 00:02:52 +0000 UTC (0+1 container statuses recorded)
Mar 22 01:12:43.490: INFO: Container chaos-mesh ready: true, restart count 0
W0322 01:12:43.497618 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
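
Note: the dump above is the framework's standard post-failure diagnostics for a node: the full Node object (annotations, allocatable resources, conditions, cached images) followed by every pod the kubelet reports for that node. The same view can be assembled by hand against any cluster; a minimal sketch, with the node name taken from the log and only standard kubectl calls:

$ kubectl get pods --all-namespaces --field-selector spec.nodeName=latest-worker   # pods scheduled to the node
$ kubectl describe node latest-worker                                              # conditions, capacity, image cache
$ kubectl get events --all-namespaces --field-selector involvedObject.kind=Node,involvedObject.name=latest-worker
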
Mar 22 01:12:43.746: INFO: Latency metrics for node latest-worker Mar 22 01:12:43.746: INFO: Logging node info for node latest-worker2 Mar 22 01:12:43.750: INFO: Node Info: &Node{ObjectMeta:{latest-worker2 7d2a1377-0c6f-45fb-899e-6c307ecb1803 7007854 0 2021-02-19 10:12:05 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux io.kubernetes.storage.mock/node:some-mock-node kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-worker2 kubernetes.io/os:linux] map[csi.volume.kubernetes.io/nodeid:{"csi-mock-csi-mock-volumes-1005":"csi-mock-csi-mock-volumes-1005","csi-mock-csi-mock-volumes-1125":"csi-mock-csi-mock-volumes-1125","csi-mock-csi-mock-volumes-1158":"csi-mock-csi-mock-volumes-1158","csi-mock-csi-mock-volumes-1167":"csi-mock-csi-mock-volumes-1167","csi-mock-csi-mock-volumes-117":"csi-mock-csi-mock-volumes-117","csi-mock-csi-mock-volumes-123":"csi-mock-csi-mock-volumes-123","csi-mock-csi-mock-volumes-1248":"csi-mock-csi-mock-volumes-1248","csi-mock-csi-mock-volumes-132":"csi-mock-csi-mock-volumes-132","csi-mock-csi-mock-volumes-1462":"csi-mock-csi-mock-volumes-1462","csi-mock-csi-mock-volumes-1508":"csi-mock-csi-mock-volumes-1508","csi-mock-csi-mock-volumes-1620":"csi-mock-csi-mock-volumes-1620","csi-mock-csi-mock-volumes-1694":"csi-mock-csi-mock-volumes-1694","csi-mock-csi-mock-volumes-1823":"csi-mock-csi-mock-volumes-1823","csi-mock-csi-mock-volumes-1829":"csi-mock-csi-mock-volumes-1829","csi-mock-csi-mock-volumes-1852":"csi-mock-csi-mock-volumes-1852","csi-mock-csi-mock-volumes-1856":"csi-mock-csi-mock-volumes-1856","csi-mock-csi-mock-volumes-194":"csi-mock-csi-mock-volumes-194","csi-mock-csi-mock-volumes-1943":"csi-mock-csi-mock-volumes-1943","csi-mock-csi-mock-volumes-2103":"csi-mock-csi-mock-volumes-2103","csi-mock-csi-mock-volumes-2109":"csi-mock-csi-mock-volumes-2109","csi-mock-csi-mock-volumes-2134":"csi-mock-csi-mock-volumes-2134","csi-mock-csi-mock-volumes-2212":"csi-mock-csi-mock-volumes-2212","csi-mock-csi-mock-volumes-2302":"csi-mock-csi-mock-volumes-2302","csi-mock-csi-mock-volumes-2308":"csi-mock-csi-mock-volumes-2308","csi-mock-csi-mock-volumes-2407":"csi-mock-csi-mock-volumes-2407","csi-mock-csi-mock-volumes-2458":"csi-mock-csi-mock-volumes-2458","csi-mock-csi-mock-volumes-2474":"csi-mock-csi-mock-volumes-2474","csi-mock-csi-mock-volumes-254":"csi-mock-csi-mock-volumes-254","csi-mock-csi-mock-volumes-2575":"csi-mock-csi-mock-volumes-2575","csi-mock-csi-mock-volumes-2622":"csi-mock-csi-mock-volumes-2622","csi-mock-csi-mock-volumes-2651":"csi-mock-csi-mock-volumes-2651","csi-mock-csi-mock-volumes-2668":"csi-mock-csi-mock-volumes-2668","csi-mock-csi-mock-volumes-2781":"csi-mock-csi-mock-volumes-2781","csi-mock-csi-mock-volumes-2791":"csi-mock-csi-mock-volumes-2791","csi-mock-csi-mock-volumes-2823":"csi-mock-csi-mock-volumes-2823","csi-mock-csi-mock-volumes-2847":"csi-mock-csi-mock-volumes-2847","csi-mock-csi-mock-volumes-295":"csi-mock-csi-mock-volumes-295","csi-mock-csi-mock-volumes-3129":"csi-mock-csi-mock-volumes-3129","csi-mock-csi-mock-volumes-3190":"csi-mock-csi-mock-volumes-3190","csi-mock-csi-mock-volumes-3428":"csi-mock-csi-mock-volumes-3428","csi-mock-csi-mock-volumes-3490":"csi-mock-csi-mock-volumes-3490","csi-mock-csi-mock-volumes-3491":"csi-mock-csi-mock-volumes-3491","csi-mock-csi-mock-volumes-3532":"csi-mock-csi-mock-volumes-3532","csi-mock-csi-mock-volumes-3609":"csi-mock-csi-mock-volumes-3609","csi-mock-csi-mock-volumes-368":"csi-mock-csi-mock-volumes-368","csi-mock-csi-mock-volumes-3698":"csi-mock-csi-mock-volumes-3698","csi-mock-csi-moc
k-volumes-3714":"csi-mock-csi-mock-volumes-3714","csi-mock-csi-mock-volumes-3720":"csi-mock-csi-mock-volumes-3720","csi-mock-csi-mock-volumes-3722":"csi-mock-csi-mock-volumes-3722","csi-mock-csi-mock-volumes-3723":"csi-mock-csi-mock-volumes-3723","csi-mock-csi-mock-volumes-3783":"csi-mock-csi-mock-volumes-3783","csi-mock-csi-mock-volumes-3887":"csi-mock-csi-mock-volumes-3887","csi-mock-csi-mock-volumes-3973":"csi-mock-csi-mock-volumes-3973","csi-mock-csi-mock-volumes-4074":"csi-mock-csi-mock-volumes-4074","csi-mock-csi-mock-volumes-4164":"csi-mock-csi-mock-volumes-4164","csi-mock-csi-mock-volumes-4181":"csi-mock-csi-mock-volumes-4181","csi-mock-csi-mock-volumes-4442":"csi-mock-csi-mock-volumes-4442","csi-mock-csi-mock-volumes-4483":"csi-mock-csi-mock-volumes-4483","csi-mock-csi-mock-volumes-4549":"csi-mock-csi-mock-volumes-4549","csi-mock-csi-mock-volumes-4707":"csi-mock-csi-mock-volumes-4707","csi-mock-csi-mock-volumes-4906":"csi-mock-csi-mock-volumes-4906","csi-mock-csi-mock-volumes-4977":"csi-mock-csi-mock-volumes-4977","csi-mock-csi-mock-volumes-5116":"csi-mock-csi-mock-volumes-5116","csi-mock-csi-mock-volumes-5166":"csi-mock-csi-mock-volumes-5166","csi-mock-csi-mock-volumes-5233":"csi-mock-csi-mock-volumes-5233","csi-mock-csi-mock-volumes-5344":"csi-mock-csi-mock-volumes-5344","csi-mock-csi-mock-volumes-5406":"csi-mock-csi-mock-volumes-5406","csi-mock-csi-mock-volumes-5433":"csi-mock-csi-mock-volumes-5433","csi-mock-csi-mock-volumes-5483":"csi-mock-csi-mock-volumes-5483","csi-mock-csi-mock-volumes-5486":"csi-mock-csi-mock-volumes-5486","csi-mock-csi-mock-volumes-5520":"csi-mock-csi-mock-volumes-5520","csi-mock-csi-mock-volumes-5540":"csi-mock-csi-mock-volumes-5540","csi-mock-csi-mock-volumes-5592":"csi-mock-csi-mock-volumes-5592","csi-mock-csi-mock-volumes-5741":"csi-mock-csi-mock-volumes-5741","csi-mock-csi-mock-volumes-5753":"csi-mock-csi-mock-volumes-5753","csi-mock-csi-mock-volumes-5790":"csi-mock-csi-mock-volumes-5790","csi-mock-csi-mock-volumes-5820":"csi-mock-csi-mock-volumes-5820","csi-mock-csi-mock-volumes-5830":"csi-mock-csi-mock-volumes-5830","csi-mock-csi-mock-volumes-5880":"csi-mock-csi-mock-volumes-5880","csi-mock-csi-mock-volumes-5886":"csi-mock-csi-mock-volumes-5886","csi-mock-csi-mock-volumes-5899":"csi-mock-csi-mock-volumes-5899","csi-mock-csi-mock-volumes-5925":"csi-mock-csi-mock-volumes-5925","csi-mock-csi-mock-volumes-5928":"csi-mock-csi-mock-volumes-5928","csi-mock-csi-mock-volumes-6098":"csi-mock-csi-mock-volumes-6098","csi-mock-csi-mock-volumes-6154":"csi-mock-csi-mock-volumes-6154","csi-mock-csi-mock-volumes-6193":"csi-mock-csi-mock-volumes-6193","csi-mock-csi-mock-volumes-6237":"csi-mock-csi-mock-volumes-6237","csi-mock-csi-mock-volumes-6393":"csi-mock-csi-mock-volumes-6393","csi-mock-csi-mock-volumes-6394":"csi-mock-csi-mock-volumes-6394","csi-mock-csi-mock-volumes-6468":"csi-mock-csi-mock-volumes-6468","csi-mock-csi-mock-volumes-6508":"csi-mock-csi-mock-volumes-6508","csi-mock-csi-mock-volumes-6516":"csi-mock-csi-mock-volumes-6516","csi-mock-csi-mock-volumes-6520":"csi-mock-csi-mock-volumes-6520","csi-mock-csi-mock-volumes-6574":"csi-mock-csi-mock-volumes-6574","csi-mock-csi-mock-volumes-6663":"csi-mock-csi-mock-volumes-6663","csi-mock-csi-mock-volumes-6715":"csi-mock-csi-mock-volumes-6715","csi-mock-csi-mock-volumes-6754":"csi-mock-csi-mock-volumes-6754","csi-mock-csi-mock-volumes-6804":"csi-mock-csi-mock-volumes-6804","csi-mock-csi-mock-volumes-6918":"csi-mock-csi-mock-volumes-6918","csi-mock-csi-mock-volumes-6925":"csi-mock-csi-mock-volumes-6925","csi-moc
k-csi-mock-volumes-7092":"csi-mock-csi-mock-volumes-7092","csi-mock-csi-mock-volumes-7139":"csi-mock-csi-mock-volumes-7139","csi-mock-csi-mock-volumes-7270":"csi-mock-csi-mock-volumes-7270","csi-mock-csi-mock-volumes-7273":"csi-mock-csi-mock-volumes-7273","csi-mock-csi-mock-volumes-7442":"csi-mock-csi-mock-volumes-7442","csi-mock-csi-mock-volumes-7448":"csi-mock-csi-mock-volumes-7448","csi-mock-csi-mock-volumes-7543":"csi-mock-csi-mock-volumes-7543","csi-mock-csi-mock-volumes-7597":"csi-mock-csi-mock-volumes-7597","csi-mock-csi-mock-volumes-7608":"csi-mock-csi-mock-volumes-7608","csi-mock-csi-mock-volumes-7642":"csi-mock-csi-mock-volumes-7642","csi-mock-csi-mock-volumes-7659":"csi-mock-csi-mock-volumes-7659","csi-mock-csi-mock-volumes-7725":"csi-mock-csi-mock-volumes-7725","csi-mock-csi-mock-volumes-7760":"csi-mock-csi-mock-volumes-7760","csi-mock-csi-mock-volumes-778":"csi-mock-csi-mock-volumes-778","csi-mock-csi-mock-volumes-7811":"csi-mock-csi-mock-volumes-7811","csi-mock-csi-mock-volumes-7819":"csi-mock-csi-mock-volumes-7819","csi-mock-csi-mock-volumes-791":"csi-mock-csi-mock-volumes-791","csi-mock-csi-mock-volumes-7929":"csi-mock-csi-mock-volumes-7929","csi-mock-csi-mock-volumes-7930":"csi-mock-csi-mock-volumes-7930","csi-mock-csi-mock-volumes-7933":"csi-mock-csi-mock-volumes-7933","csi-mock-csi-mock-volumes-7950":"csi-mock-csi-mock-volumes-7950","csi-mock-csi-mock-volumes-8005":"csi-mock-csi-mock-volumes-8005","csi-mock-csi-mock-volumes-8070":"csi-mock-csi-mock-volumes-8070","csi-mock-csi-mock-volumes-8123":"csi-mock-csi-mock-volumes-8123","csi-mock-csi-mock-volumes-8132":"csi-mock-csi-mock-volumes-8132","csi-mock-csi-mock-volumes-8134":"csi-mock-csi-mock-volumes-8134","csi-mock-csi-mock-volumes-8381":"csi-mock-csi-mock-volumes-8381","csi-mock-csi-mock-volumes-8391":"csi-mock-csi-mock-volumes-8391","csi-mock-csi-mock-volumes-8409":"csi-mock-csi-mock-volumes-8409","csi-mock-csi-mock-volumes-8420":"csi-mock-csi-mock-volumes-8420","csi-mock-csi-mock-volumes-8467":"csi-mock-csi-mock-volumes-8467","csi-mock-csi-mock-volumes-8581":"csi-mock-csi-mock-volumes-8581","csi-mock-csi-mock-volumes-8619":"csi-mock-csi-mock-volumes-8619","csi-mock-csi-mock-volumes-8708":"csi-mock-csi-mock-volumes-8708","csi-mock-csi-mock-volumes-8766":"csi-mock-csi-mock-volumes-8766","csi-mock-csi-mock-volumes-8789":"csi-mock-csi-mock-volumes-8789","csi-mock-csi-mock-volumes-8800":"csi-mock-csi-mock-volumes-8800","csi-mock-csi-mock-volumes-8819":"csi-mock-csi-mock-volumes-8819","csi-mock-csi-mock-volumes-8830":"csi-mock-csi-mock-volumes-8830","csi-mock-csi-mock-volumes-9034":"csi-mock-csi-mock-volumes-9034","csi-mock-csi-mock-volumes-9051":"csi-mock-csi-mock-volumes-9051","csi-mock-csi-mock-volumes-9052":"csi-mock-csi-mock-volumes-9052","csi-mock-csi-mock-volumes-9211":"csi-mock-csi-mock-volumes-9211","csi-mock-csi-mock-volumes-9234":"csi-mock-csi-mock-volumes-9234","csi-mock-csi-mock-volumes-9238":"csi-mock-csi-mock-volumes-9238","csi-mock-csi-mock-volumes-9316":"csi-mock-csi-mock-volumes-9316","csi-mock-csi-mock-volumes-9344":"csi-mock-csi-mock-volumes-9344","csi-mock-csi-mock-volumes-9386":"csi-mock-csi-mock-volumes-9386","csi-mock-csi-mock-volumes-941":"csi-mock-csi-mock-volumes-941","csi-mock-csi-mock-volumes-9416":"csi-mock-csi-mock-volumes-9416","csi-mock-csi-mock-volumes-9426":"csi-mock-csi-mock-volumes-9426","csi-mock-csi-mock-volumes-9429":"csi-mock-csi-mock-volumes-9429","csi-mock-csi-mock-volumes-9438":"csi-mock-csi-mock-volumes-9438","csi-mock-csi-mock-volumes-9439":"csi-mock-csi-mock-volumes-9439","csi-
mock-csi-mock-volumes-9665":"csi-mock-csi-mock-volumes-9665","csi-mock-csi-mock-volumes-9772":"csi-mock-csi-mock-volumes-9772","csi-mock-csi-mock-volumes-9793":"csi-mock-csi-mock-volumes-9793","csi-mock-csi-mock-volumes-9921":"csi-mock-csi-mock-volumes-9921","csi-mock-csi-mock-volumes-9953":"csi-mock-csi-mock-volumes-9953"} kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-02-19 10:12:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {kube-controller-manager Update v1 2021-03-22 00:34:44 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}} {e2e.test Update v1 2021-03-22 00:44:05 +0000 UTC FieldsV1 {"f:status":{"f:capacity":{"f:example.com/fakecpu":{}}}}} {kubelet Update v1 2021-03-22 00:44:09 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:csi.volume.kubernetes.io/nodeid":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:io.kubernetes.storage.mock/node":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:example.com/fakecpu":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:kind://docker/latest/latest-worker2,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},example.com/fakecpu: {{1 3} {} 1k DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},example.com/fakecpu: {{1 3} {} 1k DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 
131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-03-22 01:09:13 +0000 UTC,LastTransitionTime:2021-02-19 10:12:05 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-03-22 01:09:13 +0000 UTC,LastTransitionTime:2021-02-19 10:12:05 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-03-22 01:09:13 +0000 UTC,LastTransitionTime:2021-02-19 10:12:05 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-03-22 01:09:13 +0000 UTC,LastTransitionTime:2021-02-19 10:12:35 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.13,},NodeAddress{Type:Hostname,Address:latest-worker2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:c99819c175ab4bf18f91789643e4cec7,SystemUUID:67d3f1bb-f61c-4599-98ac-5879bee4ddde,BootID:b267d78b-f69b-4338-80e8-3f4944338e5d,KernelVersion:4.15.0-118-generic,OSImage:Ubuntu 20.04 LTS,ContainerRuntimeVersion:containerd://1.3.3-14-g449e9269,KubeletVersion:v1.21.0-alpha.0,KubeProxyVersion:v1.21.0-alpha.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/ollivier/clearwater-cassandra@sha256:7460a5e29c91e16bd36534b5284b27832e1c6e0f7c22a4bb79eda79942d250e1 docker.io/ollivier/clearwater-cassandra:hunter],SizeBytes:386500834,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead-prov@sha256:ef3803b4336ea217f9a467dac7f5c4bc3d2cd0f3cbbcc6c101419506fc3a6fa4 docker.io/ollivier/clearwater-homestead-prov:hunter],SizeBytes:360721934,},ContainerImage{Names:[docker.io/ollivier/clearwater-ellis@sha256:abcaf4a9ec74aa746f3b3949f05ee359349fce2b0b3b2deedf67164719cff5dc docker.io/ollivier/clearwater-ellis:hunter],SizeBytes:351519591,},ContainerImage{Names:[docker.io/ollivier/clearwater-homer@sha256:f8738d756137990120f6aafca7e0bc57186ab591a4b51508ba9ddf30688f4de1 docker.io/ollivier/clearwater-homer:hunter],SizeBytes:344304298,},ContainerImage{Names:[docker.io/ollivier/clearwater-astaire@sha256:5bc730435563dc3e31f3a3b7bb58b5899c453929b624052622b3a9a1c2fd04d8 docker.io/ollivier/clearwater-astaire:hunter],SizeBytes:327310970,},ContainerImage{Names:[docker.io/ollivier/clearwater-bono@sha256:0da41ddbf72c158f8e617015450587634cdf6856696005601b2108df92a27254 docker.io/ollivier/clearwater-bono:hunter],SizeBytes:303708624,},ContainerImage{Names:[docker.io/ollivier/clearwater-sprout@sha256:fd91f90bdfbfac30288ca296962e36537f8fa311d34711904e027098227d9f49 docker.io/ollivier/clearwater-sprout:hunter],SizeBytes:298627136,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead@sha256:8467d6c9caefa6a402eba927ee31f8179b2dc3386ac4768dd04f4bd630a1b9e9 docker.io/ollivier/clearwater-homestead:hunter],SizeBytes:295167572,},ContainerImage{Names:[docker.io/ollivier/clearwater-ralf@sha256:7054a741dbd4ea45fa4efd8138c07e21d35fbed627971559de5df2d03883e94f docker.io/ollivier/clearwater-ralf:hunter],SizeBytes:287441316,},ContainerImage{Names:[docker.io/ollivier/clearwater-chronos@sha256:1d5f9b5385db632e3076463fec5a4db5d4a15ef21ac5a112cb42fb0551ffa36d 
docker.io/ollivier/clearwater-chronos:hunter],SizeBytes:285504787,},ContainerImage{Names:[k8s.gcr.io/etcd:3.4.13-0],SizeBytes:254659261,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.21.0-alpha.0],SizeBytes:136866158,},ContainerImage{Names:[docker.io/aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 docker.io/aquasec/kube-hunter:0.3.1],SizeBytes:124684106,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:112029652,},ContainerImage{Names:[docker.io/gluster/glusterdynamic-provisioner@sha256:90067cb05a7d217651e84576935934fd9dcff8e6bcdcbaa416bbf36fcd09dbd1 docker.io/gluster/glusterdynamic-provisioner:v1.0],SizeBytes:111200078,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner@sha256:8bc20b52ce066dd4ea3d9eaac40c04ea8a77f47c33789676580cf4c7c9ea3c3d k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner:v1.0],SizeBytes:111199402,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.21.0-alpha.0],SizeBytes:95511852,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.21.0-alpha.0],SizeBytes:88147262,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:85425365,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.21.0-alpha.0],SizeBytes:66088748,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:49230179,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:a3f7549ea04c419276e8b84e90a515bbce5bc8a057be2ed974ec45492eca346e k8s.gcr.io/e2e-test-images/agnhost:2.26],SizeBytes:49216572,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:4f3eec1e602e1e0b0e5858c8f3d399d0ef536799ec1f7666e025f211001cb706 k8s.gcr.io/e2e-test-images/agnhost:2.28],SizeBytes:49210832,},ContainerImage{Names:[k8s.gcr.io/coredns:1.7.0],SizeBytes:45355487,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:41902332,},ContainerImage{Names:[docker.io/library/httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a docker.io/library/httpd:2.4.39-alpine],SizeBytes:41901429,},ContainerImage{Names:[docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 docker.io/library/httpd:2.4.38-alpine],SizeBytes:40765017,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:40765006,},ContainerImage{Names:[docker.io/ollivier/clearwater-live-test@sha256:1ec1c46f531b5466e62e2a3f1e65e97336e8ea0ca50f11a4305775d6aeac9a58 docker.io/ollivier/clearwater-live-test:hunter],SizeBytes:39060692,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:ff02aacd9766d597883fabafc7ad604c719a57611db1bcc1564c69a45b000a55 
gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.17],SizeBytes:25311280,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:24757245,},ContainerImage{Names:[docker.io/pingcap/chaos-daemon@sha256:2138de599ca94104101fcd8f3bd04401f8c88f2bc721d0f77d55a81b85f4d92f docker.io/pingcap/chaos-daemon:v0.8.0],SizeBytes:21885344,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:20c828075d1e36f679d6a91e905b0927141eef5e15be0c9a1ca4a6a0ed9313d2 k8s.gcr.io/sig-storage/csi-provisioner:v2.1.0],SizeBytes:21205045,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:78e3393f5fd5ff6c1e5dada2478cfa456fb7164929e573cf9a87bf6532730679 k8s.gcr.io/sig-storage/csi-provisioner:v1.6.0],SizeBytes:19408504,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/regression-issue-74839@sha256:b4f1d8d61bdad84bd50442d161d5460e4019d53e989b64220fdbc62fc87d76bf k8s.gcr.io/e2e-test-images/regression-issue-74839:1.2],SizeBytes:18651485,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:2ffa647e8107cfd39e5f464e738dce014c9f5e51b108da36c3ab621048d0bbab k8s.gcr.io/sig-storage/csi-attacher:v2.2.0],SizeBytes:18451536,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:6c6a0332693a7c456378f6abd2bb40611826c1e1a733cadbdae2daab3125b71c k8s.gcr.io/sig-storage/csi-resizer:v0.5.0],SizeBytes:18412631,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:4051e85640c22f8e00c34dbd273576fc9e1e2829992656588062be9c0f69b04b k8s.gcr.io/e2e-test-images/nonroot:1.1],SizeBytes:17748448,},ContainerImage{Names:[docker.io/pingcap/chaos-mesh@sha256:b90c04665e0275602e52df37d4b6fdd4621df7d4e4c823fc63356559f67dea72 docker.io/pingcap/chaos-mesh:v0.8.0],SizeBytes:13863887,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e docker.io/coredns/coredns:1.8.0],SizeBytes:12945155,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:4a6e0769130686518325b21b0c1d0688b54e7c79244d48e1b15634e98e40c6ef docker.io/coredns/coredns:1.7.1],SizeBytes:12935701,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:642ff9910da6ea9a8624b0234eef52af9ca75ecbec474c5507cb096bdfbae4e5 docker.io/coredns/coredns:latest],SizeBytes:12893350,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 docker.io/aquasec/kube-bench:0.3.1],SizeBytes:8042926,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:9622c6a6dac7499a055a382930f4de82905a3c5735c0753f7094115c9c871309 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v1.3.0],SizeBytes:7717137,},ContainerImage{Names:[quay.io/coreos/etcd:v2.2.5],SizeBytes:7670543,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Mar 22 01:12:43.751: INFO: Logging kubelet events for node latest-worker2
Mar 22 01:12:43.754: INFO: Logging pods the kubelet thinks are on node latest-worker2
Mar 22 01:12:43.769: INFO: kube-proxy-7q92q started at 2021-02-19 10:12:05 +0000 UTC (0+1 container statuses recorded)
Mar 22 01:12:43.769: INFO: Container kube-proxy ready: true, restart count 0
Mar 22 01:12:43.769: INFO: kindnet-7qb7q started at 2021-03-22 00:02:52 +0000 UTC (0+1 container statuses recorded)
Mar 22 01:12:43.769: INFO: Container kindnet-cni ready: true, restart count 0
Mar 22 01:12:43.769: INFO: nodeport-test-ztjlq started at 2021-03-22 01:10:33 +0000 UTC (0+1 container statuses recorded)
Mar 22 01:12:43.769: INFO: Container nodeport-test ready: true, restart count 0
Mar 22 01:12:43.769: INFO: chaos-daemon-4zjcg started at 2021-03-22 00:02:52 +0000 UTC (0+1 container statuses recorded)
Mar 22 01:12:43.769: INFO: Container chaos-daemon ready: true, restart count 0
W0322 01:12:43.774645 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
Mar 22 01:12:44.007: INFO: Latency metrics for node latest-worker2
Mar 22 01:12:44.007: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-5150" for this suite.
[AfterEach] [sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750
• Failure [131.029 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should be able to create a functioning NodePort service [Conformance] [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630

  Mar 22 01:12:43.306: Unexpected error:
      <*errors.errorString | 0xc004488460>: {
          s: "no subset of available IP address found for the endpoint nodeport-test within timeout 2m0s",
      }
      no subset of available IP address found for the endpoint nodeport-test within timeout 2m0s
  occurred

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1169
------------------------------
{"msg":"FAILED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":330,"completed":176,"skipped":3014,"failed":12,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
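
Note: the failure above means the Endpoints object for the nodeport-test Service never listed a ready pod IP within the 2m0s poll window, even though both nodeport-test pods report ready in the per-node dumps; the service plumbing, not the pods, is what timed out. A rough sketch of the scenario the spec exercises, assuming a reachable cluster (the names are illustrative; the httpd image is one already cached on the nodes above):

$ kubectl create deployment nodeport-test --image=docker.io/library/httpd:2.4.38-alpine
$ kubectl expose deployment nodeport-test --type=NodePort --port=80
$ kubectl get svc nodeport-test -o jsonpath='{.spec.ports[0].nodePort}'            # allocated from 30000-32767 by default
$ kubectl get endpoints nodeport-test -o jsonpath='{.subsets[*].addresses[*].ip}'  # what the framework polls; empty output is this failure mode

Once an address appears, a request to any node IP (for example 172.18.0.9 above) on the allocated node port should answer.
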
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 22 01:12:44.019: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD with validation schema [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Mar 22 01:12:44.098: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with known and required properties
Mar 22 01:12:47.688: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3450 --namespace=crd-publish-openapi-3450 create -f -'
Mar 22 01:12:53.596: INFO: stderr: ""
Mar 22 01:12:53.597: INFO: stdout: "e2e-test-crd-publish-openapi-2006-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n"
Mar 22 01:12:53.597: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3450 --namespace=crd-publish-openapi-3450 delete e2e-test-crd-publish-openapi-2006-crds test-foo'
Mar 22 01:12:53.699: INFO: stderr: ""
Mar 22 01:12:53.699: INFO: stdout: "e2e-test-crd-publish-openapi-2006-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n"
Mar 22 01:12:53.699: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3450 --namespace=crd-publish-openapi-3450 apply -f -'
Mar 22 01:12:54.051: INFO: stderr: ""
Mar 22 01:12:54.051: INFO: stdout: "e2e-test-crd-publish-openapi-2006-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n"
Mar 22 01:12:54.051: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3450 --namespace=crd-publish-openapi-3450 delete e2e-test-crd-publish-openapi-2006-crds test-foo'
Mar 22 01:12:54.157: INFO: stderr: ""
Mar 22 01:12:54.157: INFO: stdout: "e2e-test-crd-publish-openapi-2006-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n"
STEP: client-side validation (kubectl create and apply) rejects request with unknown properties when disallowed by the schema
Mar 22 01:12:54.157: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3450 --namespace=crd-publish-openapi-3450 create -f -'
Mar 22 01:12:54.466: INFO: rc: 1
Mar 22 01:12:54.466: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3450 --namespace=crd-publish-openapi-3450 apply -f -'
Mar 22 01:12:54.783: INFO: rc: 1
STEP: client-side validation (kubectl create and apply) rejects request without required properties
Mar 22 01:12:54.783: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3450 --namespace=crd-publish-openapi-3450 create -f -'
Mar 22 01:12:55.092: INFO: rc: 1
Mar 22 01:12:55.092: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3450 --namespace=crd-publish-openapi-3450 apply -f -'
Mar 22 01:12:55.449: INFO: rc: 1
STEP: kubectl explain works to explain CR properties
Mar 22 01:12:55.450: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3450 explain e2e-test-crd-publish-openapi-2006-crds'
Mar 22 01:12:55.744: INFO: stderr: ""
Mar 22 01:12:55.745: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-2006-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nDESCRIPTION:\n Foo CRD for Testing\n\nFIELDS:\n apiVersion\t<string>\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t<string>\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t<Object>\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t<Object>\n Specification of Foo\n\n status\t<Object>\n Status of Foo\n\n"
STEP: kubectl explain works to explain CR properties recursively
Mar 22 01:12:55.745: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3450 explain e2e-test-crd-publish-openapi-2006-crds.metadata'
Mar 22 01:12:56.018: INFO: stderr: ""
Mar 22 01:12:56.018: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-2006-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: metadata <Object>\n\nDESCRIPTION:\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n ObjectMeta is metadata that all persisted resources must have, which\n includes all objects users must create.\n\nFIELDS:\n annotations\t<map[string]string>\n Annotations is an unstructured key value map stored with a resource that\n may be set by external tools to store and retrieve arbitrary metadata. They\n are not queryable and should be preserved when modifying objects. More\n info: http://kubernetes.io/docs/user-guide/annotations\n\n clusterName\t<string>\n The name of the cluster which the object belongs to. This is used to\n distinguish resources with same name and namespace in different clusters.\n This field is not set anywhere right now and apiserver is going to ignore\n it if set in create or update request.\n\n creationTimestamp\t<string>\n CreationTimestamp is a timestamp representing the server time when this\n object was created. It is not guaranteed to be set in happens-before order\n across separate operations. Clients may not set this value. It is\n represented in RFC3339 form and is in UTC.\n\n Populated by the system. Read-only. Null for lists. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n deletionGracePeriodSeconds\t<integer>\n Number of seconds allowed for this object to gracefully terminate before it\n will be removed from the system. Only set when deletionTimestamp is also\n set. May only be shortened. Read-only.\n\n deletionTimestamp\t<string>\n DeletionTimestamp is RFC 3339 date and time at which this resource will be\n deleted. This field is set by the server when a graceful deletion is\n requested by the user, and is not directly settable by a client. The\n resource is expected to be deleted (no longer visible from resource lists,\n and not reachable by name) after the time in this field, once the\n finalizers list is empty. As long as the finalizers list contains items,\n deletion is blocked. Once the deletionTimestamp is set, this value may not\n be unset or be set further into the future, although it may be shortened or\n the resource may be deleted prior to this time. For example, a user may\n request that a pod is deleted in 30 seconds. The Kubelet will react by\n sending a graceful termination signal to the containers in the pod. After\n that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL)\n to the container and after cleanup, remove the pod from the API. In the\n presence of network partitions, this object may still exist after this\n timestamp, until an administrator or automated process can determine the\n resource is fully terminated. If not set, graceful deletion of the object\n has not been requested.\n\n Populated by the system when a graceful deletion is requested. Read-only.\n More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n finalizers\t<[]string>\n Must be empty before the object is deleted from the registry. Each entry is\n an identifier for the responsible component that will remove the entry from\n the list. If the deletionTimestamp of the object is non-nil, entries in\n this list can only be removed. Finalizers may be processed and removed in\n any order. Order is NOT enforced because it introduces significant risk of\n stuck finalizers. finalizers is a shared field, any actor with permission\n can reorder it. If the finalizer list is processed in order, then this can\n lead to a situation in which the component responsible for the first\n finalizer in the list is waiting for a signal (field value, external\n system, or other) produced by a component responsible for a finalizer later\n in the list, resulting in a deadlock. Without enforced ordering finalizers\n are free to order amongst themselves and are not vulnerable to ordering\n changes in the list.\n\n generateName\t<string>\n GenerateName is an optional prefix, used by the server, to generate a\n unique name ONLY IF the Name field has not been provided. If this field is\n used, the name returned to the client will be different than the name\n passed. This value will also be combined with a unique suffix. The provided\n value has the same validation rules as the Name field, and may be truncated\n by the length of the suffix required to make the value unique on the\n server.\n\n If this field is specified and the generated name exists, the server will\n NOT return a 409 - instead, it will either return 201 Created or 500 with\n Reason ServerTimeout indicating a unique name could not be found in the\n time allotted, and the client should retry (optionally after the time\n indicated in the Retry-After header).\n\n Applied only if Name is not specified. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency\n\n generation\t<integer>\n A sequence number representing a specific generation of the desired state.\n Populated by the system. Read-only.\n\n labels\t<map[string]string>\n Map of string keys and values that can be used to organize and categorize\n (scope and select) objects. May match selectors of replication controllers\n and services. More info: http://kubernetes.io/docs/user-guide/labels\n\n managedFields\t<[]Object>\n ManagedFields maps workflow-id and version to the set of fields that are\n managed by that workflow. This is mostly for internal housekeeping, and\n users typically shouldn't need to set or understand this field. A workflow\n can be the user's name, a controller's name, or the name of a specific\n apply path like \"ci-cd\". The set of fields is always in the version that\n the workflow used when modifying the object.\n\n name\t<string>\n Name must be unique within a namespace. Is required when creating\n resources, although some resources may allow a client to request the\n generation of an appropriate name automatically. Name is primarily intended\n for creation idempotence and configuration definition. Cannot be updated.\n More info: http://kubernetes.io/docs/user-guide/identifiers#names\n\n namespace\t<string>\n Namespace defines the space within which each name must be unique. An empty\n namespace is equivalent to the \"default\" namespace, but \"default\" is the\n canonical representation. Not all objects are required to be scoped to a\n namespace - the value of this field for those objects will be empty.\n\n Must be a DNS_LABEL. Cannot be updated. More info:\n http://kubernetes.io/docs/user-guide/namespaces\n\n ownerReferences\t<[]Object>\n List of objects depended by this object. If ALL objects in the list have\n been deleted, this object will be garbage collected. If this object is\n managed by a controller, then an entry in this list will point to this\n controller, with the controller field set to true. There cannot be more\n than one managing controller.\n\n resourceVersion\t<string>\n An opaque value that represents the internal version of this object that\n can be used by clients to determine when objects have changed. May be used\n for optimistic concurrency, change detection, and the watch operation on a\n resource or set of resources. Clients must treat these values as opaque and\n passed unmodified back to the server. They may only be valid for a\n particular resource or set of resources.\n\n Populated by the system. Read-only. Value must be treated as opaque by\n clients and . More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency\n\n selfLink\t<string>\n SelfLink is a URL representing this object. Populated by the system.\n Read-only.\n\n DEPRECATED Kubernetes will stop propagating this field in 1.20 release and\n the field is planned to be removed in 1.21 release.\n\n uid\t<string>\n UID is the unique in time and space value for this object. It is typically\n generated by the server on successful creation of a resource and is not\n allowed to change on PUT operations.\n\n Populated by the system. Read-only. More info:\n http://kubernetes.io/docs/user-guide/identifiers#uids\n\n"
Mar 22 01:12:56.019: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3450 explain e2e-test-crd-publish-openapi-2006-crds.spec'
Mar 22 01:12:56.324: INFO: stderr: ""
Mar 22 01:12:56.324: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-2006-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec <Object>\n\nDESCRIPTION:\n Specification of Foo\n\nFIELDS:\n bars\t<[]Object>\n List of Bars and their specs.\n\n"
Mar 22 01:12:56.325: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3450 explain e2e-test-crd-publish-openapi-2006-crds.spec.bars'
Mar 22 01:12:56.637: INFO: stderr: ""
Mar 22 01:12:56.637: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-2006-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n List of Bars and their specs.\n\nFIELDS:\n age\t<string>\n Age of Bar.\n\n bazs\t<[]string>\n List of Bazs.\n\n name\t<string> -required-\n Name of Bar.\n\n"
STEP: kubectl explain works to return error when explain is called on property that doesn't exist
Mar 22 01:12:56.638: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3450 explain e2e-test-crd-publish-openapi-2006-crds.spec.bars2'
Mar 22 01:12:56.934: INFO: rc: 1
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 22 01:13:00.500: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-3450" for this suite.
• [SLOW TEST:16.487 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD with validation schema [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":330,"completed":177,"skipped":3082,"failed":12,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]}
SSSSSSSSSSSSSSS
------------------------------
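
Note: everything kubectl did in the spec above is driven by the CRD's published OpenAPI v3 schema: create and apply are validated client-side, and kubectl explain renders the per-field documentation. A minimal sketch of a CRD with a comparable validation schema (the example.com group and Foo kind are illustrative, not the generated e2e-test-crd-publish-openapi-2006 names):

$ kubectl apply -f - <<'EOF'
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: foos.example.com
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: foos
    singular: foo
    kind: Foo
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        properties:
          spec:
            type: object
            properties:
              bars:                  # mirrors the bars/name/age/bazs shape explained above
                type: array
                items:
                  type: object
                  required: ["name"]
                  properties:
                    name:
                      type: string
                    age:
                      type: string
                    bazs:
                      type: array
                      items:
                        type: string
EOF
$ kubectl explain foos.spec.bars    # served from the published schema
$ kubectl explain foos.spec.bars2   # fails with rc 1, like the bars2 call in the log
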
[sig-auth] ServiceAccounts should mount an API token into pods [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-auth] ServiceAccounts
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 22 01:13:00.506: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should mount an API token into pods [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: getting the auto-created API token
STEP: reading a file in the container
Mar 22 01:13:05.123: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-8441 pod-service-account-026350c3-f67a-4f76-a996-bc458c5ccc66 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token'
STEP: reading a file in the container
Mar 22 01:13:05.325: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-8441 pod-service-account-026350c3-f67a-4f76-a996-bc458c5ccc66 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt'
STEP: reading a file in the container
Mar 22 01:13:05.525: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-8441 pod-service-account-026350c3-f67a-4f76-a996-bc458c5ccc66 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace'
[AfterEach] [sig-auth] ServiceAccounts
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 22 01:13:05.729: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-8441" for this suite.
• [SLOW TEST:5.230 seconds]
[sig-auth] ServiceAccounts
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should mount an API token into pods [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods [Conformance]","total":330,"completed":178,"skipped":3097,"failed":12,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]}
SSSS
------------------------------
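
Note: the three kubectl exec calls above read the files that the ServiceAccount admission controller projects into every pod (unless automountServiceAccountToken is disabled): token, ca.crt, and namespace under /var/run/secrets/kubernetes.io/serviceaccount. A quick sketch of the same check against any running pod (the pod name is illustrative):

$ kubectl exec mypod -- ls /var/run/secrets/kubernetes.io/serviceaccount
# ca.crt  namespace  token
$ kubectl exec mypod -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace
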
Elapsed: 2.057653884s Mar 22 01:13:09.916: INFO: Pod "pod-configmaps-3a7a6080-9d16-48d0-adcd-e09c70fb5aa7": Phase="Running", Reason="", readiness=true. Elapsed: 4.062856682s Mar 22 01:13:11.922: INFO: Pod "pod-configmaps-3a7a6080-9d16-48d0-adcd-e09c70fb5aa7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.067896585s STEP: Saw pod success Mar 22 01:13:11.922: INFO: Pod "pod-configmaps-3a7a6080-9d16-48d0-adcd-e09c70fb5aa7" satisfied condition "Succeeded or Failed" Mar 22 01:13:11.925: INFO: Trying to get logs from node latest-worker pod pod-configmaps-3a7a6080-9d16-48d0-adcd-e09c70fb5aa7 container agnhost-container: STEP: delete the pod Mar 22 01:13:11.993: INFO: Waiting for pod pod-configmaps-3a7a6080-9d16-48d0-adcd-e09c70fb5aa7 to disappear Mar 22 01:13:11.996: INFO: Pod pod-configmaps-3a7a6080-9d16-48d0-adcd-e09c70fb5aa7 no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 01:13:11.996: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-8139" for this suite. • [SLOW TEST:6.267 seconds] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":330,"completed":179,"skipped":3101,"failed":12,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]} SS ------------------------------ [sig-node] Secrets should be consumable via the environment [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-node] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 01:13:12.004: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, 
basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating secret secrets-2629/secret-test-93a9aa87-334f-4d8c-a6b3-5e1a9d62e685 STEP: Creating a pod to test consume secrets Mar 22 01:13:12.168: INFO: Waiting up to 5m0s for pod "pod-configmaps-adb8b032-4239-497e-9ecd-dc40876a961c" in namespace "secrets-2629" to be "Succeeded or Failed" Mar 22 01:13:12.178: INFO: Pod "pod-configmaps-adb8b032-4239-497e-9ecd-dc40876a961c": Phase="Pending", Reason="", readiness=false. Elapsed: 9.959694ms Mar 22 01:13:14.221: INFO: Pod "pod-configmaps-adb8b032-4239-497e-9ecd-dc40876a961c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.053056557s Mar 22 01:13:16.226: INFO: Pod "pod-configmaps-adb8b032-4239-497e-9ecd-dc40876a961c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.057924641s STEP: Saw pod success Mar 22 01:13:16.226: INFO: Pod "pod-configmaps-adb8b032-4239-497e-9ecd-dc40876a961c" satisfied condition "Succeeded or Failed" Mar 22 01:13:16.229: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-adb8b032-4239-497e-9ecd-dc40876a961c container env-test: STEP: delete the pod Mar 22 01:13:16.329: INFO: Waiting for pod pod-configmaps-adb8b032-4239-497e-9ecd-dc40876a961c to disappear Mar 22 01:13:16.344: INFO: Pod pod-configmaps-adb8b032-4239-497e-9ecd-dc40876a961c no longer exists [AfterEach] [sig-node] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 01:13:16.344: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-2629" for this suite. 
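The secrets test above creates a Secret, then a pod whose container imports one of its keys as an environment variable, and waits for the pod to reach Succeeded. A minimal client-go sketch of the same shape (not the framework's actual helper; the namespace and object names here are hypothetical, while the agnhost image is the one this run uses):

    package main

    import (
        "context"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        ctx := context.Background()
        ns := "default" // the e2e framework generates a fresh namespace per test

        // Secret holding one key, analogous to secret-test-<uid> in the log.
        sec := &corev1.Secret{
            ObjectMeta: metav1.ObjectMeta{Name: "env-secret"},
            StringData: map[string]string{"data-1": "value-1"},
        }
        if _, err := cs.CoreV1().Secrets(ns).Create(ctx, sec, metav1.CreateOptions{}); err != nil {
            panic(err)
        }

        // Pod whose container sees the key as $SECRET_DATA and exits immediately,
        // so its phase can settle at Succeeded, the condition the test waits on.
        pod := &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "env-test"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                Containers: []corev1.Container{{
                    Name:    "env-test",
                    Image:   "k8s.gcr.io/e2e-test-images/agnhost:2.28",
                    Command: []string{"sh", "-c", "env"},
                    Env: []corev1.EnvVar{{
                        Name: "SECRET_DATA",
                        ValueFrom: &corev1.EnvVarSource{
                            SecretKeyRef: &corev1.SecretKeySelector{
                                LocalObjectReference: corev1.LocalObjectReference{Name: "env-secret"},
                                Key:                  "data-1",
                            },
                        },
                    }},
                }},
            },
        }
        if _, err := cs.CoreV1().Pods(ns).Create(ctx, pod, metav1.CreateOptions{}); err != nil {
            panic(err)
        }
    }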
•{"msg":"PASSED [sig-node] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":330,"completed":180,"skipped":3103,"failed":12,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 01:13:16.351: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54 [It] should serve a basic image on each replica with a public image [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating replication controller my-hostname-basic-eb6b0903-d89b-4f79-a5ec-ac3b9a57cf3e Mar 22 01:13:16.497: INFO: Pod name my-hostname-basic-eb6b0903-d89b-4f79-a5ec-ac3b9a57cf3e: Found 0 pods out of 1 Mar 22 01:13:21.502: INFO: Pod name my-hostname-basic-eb6b0903-d89b-4f79-a5ec-ac3b9a57cf3e: Found 1 pods out of 1 Mar 22 01:13:21.502: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-eb6b0903-d89b-4f79-a5ec-ac3b9a57cf3e" are running Mar 22 01:13:21.505: INFO: Pod "my-hostname-basic-eb6b0903-d89b-4f79-a5ec-ac3b9a57cf3e-xl6pz" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-22 01:13:16 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-22 01:13:20 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-22 01:13:20 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC 
LastTransitionTime:2021-03-22 01:13:16 +0000 UTC Reason: Message:}]) Mar 22 01:13:21.506: INFO: Trying to dial the pod Mar 22 01:13:26.520: INFO: Controller my-hostname-basic-eb6b0903-d89b-4f79-a5ec-ac3b9a57cf3e: Got expected result from replica 1 [my-hostname-basic-eb6b0903-d89b-4f79-a5ec-ac3b9a57cf3e-xl6pz]: "my-hostname-basic-eb6b0903-d89b-4f79-a5ec-ac3b9a57cf3e-xl6pz", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 01:13:26.520: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-2399" for this suite. • [SLOW TEST:10.179 seconds] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]","total":330,"completed":181,"skipped":3149,"failed":12,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]} SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 01:13:26.530: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name cm-test-opt-del-5f756fd5-2087-4bf4-99e4-90ad2e570dfa STEP: Creating configMap with name 
cm-test-opt-upd-d86ea93b-cca8-435c-aee5-a2725c43677c STEP: Creating the pod Mar 22 01:13:26.664: INFO: The status of Pod pod-projected-configmaps-d8fd8a17-3a29-4063-b822-4f6b6542830e is Pending, waiting for it to be Running (with Ready = true) Mar 22 01:13:28.784: INFO: The status of Pod pod-projected-configmaps-d8fd8a17-3a29-4063-b822-4f6b6542830e is Pending, waiting for it to be Running (with Ready = true) Mar 22 01:13:30.670: INFO: The status of Pod pod-projected-configmaps-d8fd8a17-3a29-4063-b822-4f6b6542830e is Pending, waiting for it to be Running (with Ready = true) Mar 22 01:13:32.671: INFO: The status of Pod pod-projected-configmaps-d8fd8a17-3a29-4063-b822-4f6b6542830e is Pending, waiting for it to be Running (with Ready = true) Mar 22 01:13:34.669: INFO: The status of Pod pod-projected-configmaps-d8fd8a17-3a29-4063-b822-4f6b6542830e is Running (Ready = true) STEP: Deleting configmap cm-test-opt-del-5f756fd5-2087-4bf4-99e4-90ad2e570dfa STEP: Updating configmap cm-test-opt-upd-d86ea93b-cca8-435c-aee5-a2725c43677c STEP: Creating configMap with name cm-test-opt-create-de6c33b1-0352-4b4d-9ae7-c467bec89344 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 01:14:57.583: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1782" for this suite. • [SLOW TEST:91.063 seconds] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":330,"completed":182,"skipped":3166,"failed":12,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]} SSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 01:14:57.593: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name configmap-test-volume-map-15629615-3b52-4956-8970-ffad30d9b70c STEP: Creating a pod to test consume configMaps Mar 22 01:14:57.722: INFO: Waiting up to 5m0s for pod "pod-configmaps-7ca30a8f-dc14-47a8-8f7d-f42deea774f4" in namespace "configmap-7556" to be "Succeeded or Failed" Mar 22 01:14:57.754: INFO: Pod "pod-configmaps-7ca30a8f-dc14-47a8-8f7d-f42deea774f4": Phase="Pending", Reason="", readiness=false. Elapsed: 32.298219ms Mar 22 01:14:59.758: INFO: Pod "pod-configmaps-7ca30a8f-dc14-47a8-8f7d-f42deea774f4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035921265s Mar 22 01:15:01.768: INFO: Pod "pod-configmaps-7ca30a8f-dc14-47a8-8f7d-f42deea774f4": Phase="Running", Reason="", readiness=true. Elapsed: 4.046292352s Mar 22 01:15:03.774: INFO: Pod "pod-configmaps-7ca30a8f-dc14-47a8-8f7d-f42deea774f4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.051811162s STEP: Saw pod success Mar 22 01:15:03.774: INFO: Pod "pod-configmaps-7ca30a8f-dc14-47a8-8f7d-f42deea774f4" satisfied condition "Succeeded or Failed" Mar 22 01:15:03.777: INFO: Trying to get logs from node latest-worker pod pod-configmaps-7ca30a8f-dc14-47a8-8f7d-f42deea774f4 container agnhost-container: STEP: delete the pod Mar 22 01:15:03.907: INFO: Waiting for pod pod-configmaps-7ca30a8f-dc14-47a8-8f7d-f42deea774f4 to disappear Mar 22 01:15:04.106: INFO: Pod pod-configmaps-7ca30a8f-dc14-47a8-8f7d-f42deea774f4 no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 01:15:04.106: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-7556" for this suite. 
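The "mappings as non-root" variant above projects a ConfigMap key to a custom path inside the volume and runs the consumer pod under a non-root UID. A sketch of the relevant spec, assuming client-go; all names and the UID are hypothetical, and the real test may set the security context at a different level:

    package main

    import (
        "context"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        ctx := context.Background()
        ns := "default"     // hypothetical; the test uses a generated configmap-XXXX namespace
        uid := int64(1001)  // any non-root UID

        cm := &corev1.ConfigMap{
            ObjectMeta: metav1.ObjectMeta{Name: "vol-cm"},
            Data:       map[string]string{"data-1": "value-1"},
        }
        if _, err := cs.CoreV1().ConfigMaps(ns).Create(ctx, cm, metav1.CreateOptions{}); err != nil {
            panic(err)
        }

        pod := &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "pod-configmaps-demo"},
            Spec: corev1.PodSpec{
                RestartPolicy:   corev1.RestartPolicyNever,
                SecurityContext: &corev1.PodSecurityContext{RunAsUser: &uid},
                Volumes: []corev1.Volume{{
                    Name: "configmap-volume",
                    VolumeSource: corev1.VolumeSource{
                        ConfigMap: &corev1.ConfigMapVolumeSource{
                            LocalObjectReference: corev1.LocalObjectReference{Name: "vol-cm"},
                            // The "mapping": key data-1 surfaces at path/to/data-2 in the volume.
                            Items: []corev1.KeyToPath{{Key: "data-1", Path: "path/to/data-2"}},
                        },
                    },
                }},
                Containers: []corev1.Container{{
                    Name:    "agnhost-container",
                    Image:   "k8s.gcr.io/e2e-test-images/agnhost:2.28",
                    Command: []string{"cat", "/etc/configmap-volume/path/to/data-2"},
                    VolumeMounts: []corev1.VolumeMount{{
                        Name:      "configmap-volume",
                        MountPath: "/etc/configmap-volume",
                    }},
                }},
            },
        }
        if _, err := cs.CoreV1().Pods(ns).Create(ctx, pod, metav1.CreateOptions{}); err != nil {
            panic(err)
        }
    }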
• [SLOW TEST:6.541 seconds] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":330,"completed":183,"skipped":3176,"failed":12,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]} [sig-node] PodTemplates should delete a collection of pod templates [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-node] PodTemplates /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 01:15:04.134: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename podtemplate STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a collection of pod templates [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Create set of pod templates Mar 22 01:15:04.228: INFO: created test-podtemplate-1 Mar 22 01:15:04.236: INFO: created test-podtemplate-2 Mar 22 01:15:04.290: INFO: created test-podtemplate-3 STEP: get a list of pod templates with a label in the current namespace STEP: delete collection of pod templates Mar 22 01:15:04.523: INFO: requesting DeleteCollection of pod templates STEP: check that the list of pod templates matches the requested quantity Mar 22 01:15:04.582: INFO: requesting list of pod templates to confirm quantity [AfterEach] [sig-node] PodTemplates /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 01:15:04.603: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "podtemplate-3822" for this suite. 
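The pod-template test above exercises DeleteCollection: a single call removes every object matching a label selector, rather than deleting each one by name. A minimal sketch, assuming client-go (the label key, namespace, and image are hypothetical):

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        ctx := context.Background()
        ns := "default"

        // Create a labeled set, mirroring test-podtemplate-1..3 above.
        for i := 1; i <= 3; i++ {
            pt := &corev1.PodTemplate{
                ObjectMeta: metav1.ObjectMeta{
                    Name:   fmt.Sprintf("test-podtemplate-%d", i),
                    Labels: map[string]string{"podtemplate-set": "true"},
                },
                Template: corev1.PodTemplateSpec{
                    Spec: corev1.PodSpec{Containers: []corev1.Container{{
                        Name:  "nginx",
                        Image: "nginx",
                    }}},
                },
            }
            if _, err := cs.CoreV1().PodTemplates(ns).Create(ctx, pt, metav1.CreateOptions{}); err != nil {
                panic(err)
            }
        }

        // One request deletes everything matching the selector; the test then
        // lists again to confirm the collection is empty.
        err = cs.CoreV1().PodTemplates(ns).DeleteCollection(ctx,
            metav1.DeleteOptions{},
            metav1.ListOptions{LabelSelector: "podtemplate-set=true"})
        if err != nil {
            panic(err)
        }
    }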
•{"msg":"PASSED [sig-node] PodTemplates should delete a collection of pod templates [Conformance]","total":330,"completed":184,"skipped":3176,"failed":12,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 01:15:04.614: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 22 01:15:05.309: INFO: new replicaset for deployment "sample-webhook-deployment" is yet to be created Mar 22 01:15:07.322: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751972505, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751972505, loc:(*time.Location)(0x99208a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751972505, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751972505, loc:(*time.Location)(0x99208a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-8977db\" is 
progressing."}}, CollisionCount:(*int32)(nil)} Mar 22 01:15:09.378: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751972505, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751972505, loc:(*time.Location)(0x99208a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751972505, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751972505, loc:(*time.Location)(0x99208a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-8977db\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 22 01:15:12.408: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should unconditionally reject operations on fail closed webhook [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API STEP: create a namespace for the webhook STEP: create a configmap should be unconditionally rejected by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 01:15:12.515: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-9279" for this suite. STEP: Destroying namespace "webhook-9279-markers" for this suite. 
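"Fail closed" above means the webhook's failurePolicy is Fail: when the API server cannot reach the backend at all, matching requests are rejected rather than waved through. A sketch of such a registration, assuming client-go; the service reference deliberately points at nothing, and the namespace selector the real test uses to scope itself to its marker namespace is omitted for brevity:

    package main

    import (
        "context"

        admissionv1 "k8s.io/api/admissionregistration/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }

        fail := admissionv1.Fail
        none := admissionv1.SideEffectClassNone
        path := "/configmaps" // hypothetical path; nothing is listening anyway

        whc := &admissionv1.ValidatingWebhookConfiguration{
            ObjectMeta: metav1.ObjectMeta{Name: "fail-closed-demo"},
            Webhooks: []admissionv1.ValidatingWebhook{{
                Name: "fail-closed.example.com",
                ClientConfig: admissionv1.WebhookClientConfig{
                    // A service that does not exist: every admission call errors,
                    // and with FailurePolicy=Fail that error rejects the request.
                    Service: &admissionv1.ServiceReference{
                        Namespace: "default",
                        Name:      "no-such-webhook-service",
                        Path:      &path,
                    },
                },
                Rules: []admissionv1.RuleWithOperations{{
                    Operations: []admissionv1.OperationType{admissionv1.Create},
                    Rule: admissionv1.Rule{
                        APIGroups:   []string{""},
                        APIVersions: []string{"v1"},
                        Resources:   []string{"configmaps"},
                    },
                }},
                FailurePolicy:           &fail,
                SideEffects:             &none,
                AdmissionReviewVersions: []string{"v1"},
            }},
        }
        if _, err := cs.AdmissionregistrationV1().ValidatingWebhookConfigurations().Create(
            ctxOrBackground(), whc, metav1.CreateOptions{}); err != nil {
            panic(err)
        }
    }

    // ctxOrBackground is a trivial helper so the Create call above stays on one line.
    func ctxOrBackground() context.Context { return context.Background() }

With this in place, a ConfigMap create in a matching namespace should come back with a webhook-denied error, which is the assertion the STEP above makes.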
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 • [SLOW TEST:8.023 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should unconditionally reject operations on fail closed webhook [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":330,"completed":185,"skipped":3208,"failed":12,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]} S ------------------------------ [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 01:15:12.637: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete pods created by rc when not orphaning [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: create the rc STEP: delete the rc STEP: wait for all pods to be garbage collected STEP: Gathering metrics W0322 01:15:22.729286 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Mar 22 01:16:24.747: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering. 
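The garbage collection observed above is driven by ownerReferences: pods created by a replication controller carry a controller owner reference, and deleting the RC without orphaning lets the GC collect the dependents, which is what the wait step watches for. A sketch of the deletion side, assuming client-go and an existing RC (both names hypothetical):

    package main

    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }

        // Background propagation (the "not orphaning" case): the RC object is
        // deleted immediately and the garbage collector then deletes its pods.
        // DeletePropagationOrphan would instead strip the owner references and
        // leave the pods running.
        policy := metav1.DeletePropagationBackground
        err = cs.CoreV1().ReplicationControllers("gc-2150").Delete(
            context.Background(), "simpletest.rc",
            metav1.DeleteOptions{PropagationPolicy: &policy})
        if err != nil {
            panic(err)
        }
    }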
[AfterEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 01:16:24.747: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-2150" for this suite. • [SLOW TEST:72.121 seconds] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should delete pods created by rc when not orphaning [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":330,"completed":186,"skipped":3209,"failed":12,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]} SSSS ------------------------------ [sig-node] Variable Expansion should succeed in writing subpaths in container [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 01:16:24.759: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should succeed in writing subpaths in container [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating the pod STEP: waiting for pod running STEP: creating a file in subpath Mar 22 01:16:30.901: INFO: ExecWithOptions {Command:[/bin/sh -c touch /volume_mount/mypath/foo/test.log] Namespace:var-expansion-2519 PodName:var-expansion-92e7ee61-6534-4737-abbe-bab69be01304 ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 22 01:16:30.901: INFO: >>> kubeConfig: /root/.kube/config STEP: test for file in mounted path Mar 22 01:16:31.038: INFO: ExecWithOptions {Command:[/bin/sh -c test -f /subpath_mount/test.log] 
Namespace:var-expansion-2519 PodName:var-expansion-92e7ee61-6534-4737-abbe-bab69be01304 ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 22 01:16:31.038: INFO: >>> kubeConfig: /root/.kube/config STEP: updating the annotation value Mar 22 01:16:31.659: INFO: Successfully updated pod "var-expansion-92e7ee61-6534-4737-abbe-bab69be01304" STEP: waiting for annotated pod running STEP: deleting the pod gracefully Mar 22 01:16:31.763: INFO: Deleting pod "var-expansion-92e7ee61-6534-4737-abbe-bab69be01304" in namespace "var-expansion-2519" Mar 22 01:16:31.767: INFO: Wait up to 5m0s for pod "var-expansion-92e7ee61-6534-4737-abbe-bab69be01304" to be fully deleted [AfterEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 01:17:45.807: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-2519" for this suite. • [SLOW TEST:81.074 seconds] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should succeed in writing subpaths in container [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Variable Expansion should succeed in writing subpaths in container [Slow] [Conformance]","total":330,"completed":187,"skipped":3213,"failed":12,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]} SSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 01:17:45.833: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241 [It] should check if kubectl describe prints relevant information for rc and pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Mar 22 01:17:45.940: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=kubectl-9430 create -f -' Mar 22 01:17:46.378: INFO: stderr: "" Mar 22 01:17:46.378: INFO: stdout: "replicationcontroller/agnhost-primary created\n" Mar 22 01:17:46.378: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=kubectl-9430 create -f -' Mar 22 01:17:46.794: INFO: stderr: "" Mar 22 01:17:46.794: INFO: stdout: "service/agnhost-primary created\n" STEP: Waiting for Agnhost primary to start. Mar 22 01:17:47.799: INFO: Selector matched 1 pods for map[app:agnhost] Mar 22 01:17:47.799: INFO: Found 0 / 1 Mar 22 01:17:48.798: INFO: Selector matched 1 pods for map[app:agnhost] Mar 22 01:17:48.799: INFO: Found 0 / 1 Mar 22 01:17:49.799: INFO: Selector matched 1 pods for map[app:agnhost] Mar 22 01:17:49.800: INFO: Found 0 / 1 Mar 22 01:17:50.799: INFO: Selector matched 1 pods for map[app:agnhost] Mar 22 01:17:50.799: INFO: Found 1 / 1 Mar 22 01:17:50.799: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Mar 22 01:17:50.802: INFO: Selector matched 1 pods for map[app:agnhost] Mar 22 01:17:50.802: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Mar 22 01:17:50.802: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=kubectl-9430 describe pod agnhost-primary-jh9l7' Mar 22 01:17:50.974: INFO: stderr: "" Mar 22 01:17:50.974: INFO: stdout: "Name: agnhost-primary-jh9l7\nNamespace: kubectl-9430\nPriority: 0\nNode: latest-worker2/172.18.0.13\nStart Time: Mon, 22 Mar 2021 01:17:46 +0000\nLabels: app=agnhost\n role=primary\nAnnotations: <none>\nStatus: Running\nIP: 10.244.1.208\nIPs:\n IP: 10.244.1.208\nControlled By: ReplicationController/agnhost-primary\nContainers:\n agnhost-primary:\n Container ID: containerd://42328966f38dd6a976a233ee9f848457e708d2587d706b8689a68efb2461f750\n Image: k8s.gcr.io/e2e-test-images/agnhost:2.28\n Image ID: k8s.gcr.io/e2e-test-images/agnhost@sha256:4f3eec1e602e1e0b0e5858c8f3d399d0ef536799ec1f7666e025f211001cb706\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Mon, 22 Mar 2021 01:17:49 +0000\n Ready: True\n Restart Count: 0\n Environment: <none>\n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-jh56p (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-jh56p:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-jh56p\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: <none>\nTolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s\n node.kubernetes.io/unreachable:NoExecute op=Exists for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 4s default-scheduler Successfully assigned kubectl-9430/agnhost-primary-jh9l7 to latest-worker2\n Normal Pulled 3s kubelet Container image \"k8s.gcr.io/e2e-test-images/agnhost:2.28\" already present on machine\n Normal Created 1s kubelet Created container agnhost-primary\n Normal Started 1s kubelet Started container 
agnhost-primary\n" Mar 22 01:17:50.974: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=kubectl-9430 describe rc agnhost-primary' Mar 22 01:17:51.110: INFO: stderr: "" Mar 22 01:17:51.110: INFO: stdout: "Name: agnhost-primary\nNamespace: kubectl-9430\nSelector: app=agnhost,role=primary\nLabels: app=agnhost\n role=primary\nAnnotations: <none>\nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=agnhost\n role=primary\n Containers:\n agnhost-primary:\n Image: k8s.gcr.io/e2e-test-images/agnhost:2.28\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: <none>\n Mounts: <none>\n Volumes: <none>\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 5s replication-controller Created pod: agnhost-primary-jh9l7\n" Mar 22 01:17:51.110: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=kubectl-9430 describe service agnhost-primary' Mar 22 01:17:51.235: INFO: stderr: "" Mar 22 01:17:51.236: INFO: stdout: "Name: agnhost-primary\nNamespace: kubectl-9430\nLabels: app=agnhost\n role=primary\nAnnotations: <none>\nSelector: app=agnhost,role=primary\nType: ClusterIP\nIP Families: <none>\nIP: 10.96.44.128\nIPs: 10.96.44.128\nPort: 6379/TCP\nTargetPort: agnhost-server/TCP\nEndpoints: 10.244.1.208:6379\nSession Affinity: None\nEvents: <none>\n" Mar 22 01:17:51.240: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=kubectl-9430 describe node latest-control-plane' Mar 22 01:17:51.382: INFO: stderr: "" Mar 22 01:17:51.382: INFO: stdout: "Name: latest-control-plane\nRoles: control-plane,master\nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=latest-control-plane\n kubernetes.io/os=linux\n node-role.kubernetes.io/control-plane=\n node-role.kubernetes.io/master=\nAnnotations: kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Fri, 19 Feb 2021 10:11:38 +0000\nTaints: node-role.kubernetes.io/master:NoSchedule\nUnschedulable: false\nLease:\n HolderIdentity: latest-control-plane\n AcquireTime: <unset>\n RenewTime: Mon, 22 Mar 2021 01:17:49 +0000\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n MemoryPressure False Mon, 22 Mar 2021 01:14:44 +0000 Fri, 19 Feb 2021 10:11:38 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Mon, 22 Mar 2021 01:14:44 +0000 Fri, 19 Feb 2021 10:11:38 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Mon, 22 Mar 2021 01:14:44 +0000 Fri, 19 Feb 2021 10:11:38 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Mon, 22 Mar 2021 01:14:44 +0000 Fri, 19 Feb 2021 10:12:15 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 172.18.0.14\n Hostname: latest-control-plane\nCapacity:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759868Ki\n pods: 110\nAllocatable:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759868Ki\n pods: 110\nSystem Info:\n Machine ID: 2277e49732264d9b915753a27b5b08cc\n System UUID: 
3fcd47a6-9190-448f-a26a-9823c0424f23\n Boot ID: b267d78b-f69b-4338-80e8-3f4944338e5d\n Kernel Version: 4.15.0-118-generic\n OS Image: Ubuntu 20.04 LTS\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.3.3-14-g449e9269\n Kubelet Version: v1.21.0-alpha.0\n Kube-Proxy Version: v1.21.0-alpha.0\nPodCIDR: 10.244.0.0/24\nPodCIDRs: 10.244.0.0/24\nProviderID: kind://docker/latest/latest-control-plane\nNon-terminated Pods: (9 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system coredns-74ff55c5b-9rxsk 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 76m\n kube-system coredns-74ff55c5b-tqd5x 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 76m\n kube-system etcd-latest-control-plane 100m (0%) 0 (0%) 100Mi (0%) 0 (0%) 30d\n kube-system kindnet-94zqp 100m (0%) 100m (0%) 50Mi (0%) 50Mi (0%) 30d\n kube-system kube-apiserver-latest-control-plane 250m (1%) 0 (0%) 0 (0%) 0 (0%) 30d\n kube-system kube-controller-manager-latest-control-plane 200m (1%) 0 (0%) 0 (0%) 0 (0%) 30d\n kube-system kube-proxy-6jdsd 0 (0%) 0 (0%) 0 (0%) 0 (0%) 30d\n kube-system kube-scheduler-latest-control-plane 100m (0%) 0 (0%) 0 (0%) 0 (0%) 30d\n local-path-storage local-path-provisioner-8b46957d4-54gls 0 (0%) 0 (0%) 0 (0%) 0 (0%) 30d\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 950m (5%) 100m (0%)\n memory 290Mi (0%) 390Mi (0%)\n ephemeral-storage 100Mi (0%) 0 (0%)\n hugepages-1Gi 0 (0%) 0 (0%)\n hugepages-2Mi 0 (0%) 0 (0%)\nEvents: <none>\n" Mar 22 01:17:51.382: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=kubectl-9430 describe namespace kubectl-9430' Mar 22 01:17:51.487: INFO: stderr: "" Mar 22 01:17:51.487: INFO: stdout: "Name: kubectl-9430\nLabels: e2e-framework=kubectl\n e2e-run=15868e10-8b7e-4bfb-9e34-eb41f461b339\nAnnotations: <none>\nStatus: Active\n\nNo resource quota.\n\nNo LimitRange resource.\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 01:17:51.487: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9430" for this suite. 
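The describe checks above shell out to the real kubectl binary rather than reimplementing describe; the framework then asserts on the captured stdout. A minimal sketch of the same pattern with os/exec, using the kubeconfig path from this run (the pod name is the one the log happens to show):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Mirrors the "Running '/usr/local/bin/kubectl ... describe pod ...'" lines above.
        cmd := exec.Command("kubectl",
            "--kubeconfig", "/root/.kube/config",
            "--namespace", "kubectl-9430",
            "describe", "pod", "agnhost-primary-jh9l7")
        out, err := cmd.CombinedOutput()
        if err != nil {
            panic(fmt.Sprintf("%v: %s", err, out))
        }
        // The e2e test then asserts that fields such as Name, Namespace, Node,
        // Labels, Status, and Controlled By appear in this output.
        fmt.Printf("%s", out)
    }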
• [SLOW TEST:5.663 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl describe /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1084 should check if kubectl describe prints relevant information for rc and pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance]","total":330,"completed":188,"skipped":3224,"failed":12,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]} SSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 01:17:51.496: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename tables STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/table_conversion.go:47 [It] should return a 406 for a backend which does not implement metadata [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [AfterEach] [sig-api-machinery] Servers with support for Table transformation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 01:17:51.620: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "tables-8398" for this suite. 
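Table transformation is negotiated through the Accept header: a client that asks for application/json;as=Table;v=v1;g=meta.k8s.io gets rows and column definitions back, while a backend that cannot produce that representation must answer 406 Not Acceptable, which is what the test above asserts. A sketch of the happy path against pods, assuming client-go:

    package main

    import (
        "context"
        "encoding/json"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }

        // Ask the server to render pods in kube-system as a Table.
        raw, err := cs.CoreV1().RESTClient().Get().
            Namespace("kube-system").
            Resource("pods").
            SetHeader("Accept", "application/json;as=Table;v=v1;g=meta.k8s.io").
            Do(context.Background()).Raw()
        if err != nil {
            panic(err)
        }

        var table metav1.Table
        if err := json.Unmarshal(raw, &table); err != nil {
            panic(err)
        }
        for _, col := range table.ColumnDefinitions {
            fmt.Println("column:", col.Name)
        }
    }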
•{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":330,"completed":189,"skipped":3237,"failed":12,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]} SS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 01:17:51.656: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 [It] should provide container's memory limit [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin Mar 22 01:17:51.811: INFO: Waiting up to 5m0s for pod "downwardapi-volume-34162899-b8e3-4d69-be3e-0ed3761b7156" in namespace "projected-6385" to be "Succeeded or Failed" Mar 22 01:17:51.827: INFO: Pod "downwardapi-volume-34162899-b8e3-4d69-be3e-0ed3761b7156": Phase="Pending", Reason="", readiness=false. Elapsed: 15.852953ms Mar 22 01:17:53.832: INFO: Pod "downwardapi-volume-34162899-b8e3-4d69-be3e-0ed3761b7156": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020972647s Mar 22 01:17:55.837: INFO: Pod "downwardapi-volume-34162899-b8e3-4d69-be3e-0ed3761b7156": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.02609189s STEP: Saw pod success Mar 22 01:17:55.837: INFO: Pod "downwardapi-volume-34162899-b8e3-4d69-be3e-0ed3761b7156" satisfied condition "Succeeded or Failed" Mar 22 01:17:55.840: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-34162899-b8e3-4d69-be3e-0ed3761b7156 container client-container: STEP: delete the pod Mar 22 01:17:55.891: INFO: Waiting for pod downwardapi-volume-34162899-b8e3-4d69-be3e-0ed3761b7156 to disappear Mar 22 01:17:55.901: INFO: Pod downwardapi-volume-34162899-b8e3-4d69-be3e-0ed3761b7156 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 01:17:55.901: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6385" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":330,"completed":190,"skipped":3239,"failed":12,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]} SSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 01:17:55.931: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating projection with secret that has name projected-secret-test-map-a65fad0f-dace-4117-bcd3-f1ab84f387b0 STEP: Creating a pod to test consume secrets Mar 22 01:17:56.023: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-1e913cee-4958-4e48-97bc-8107ad681ee4" in namespace "projected-4941" to be "Succeeded or Failed" Mar 22 01:17:56.043: INFO: Pod 
"pod-projected-secrets-1e913cee-4958-4e48-97bc-8107ad681ee4": Phase="Pending", Reason="", readiness=false. Elapsed: 20.628267ms Mar 22 01:17:58.165: INFO: Pod "pod-projected-secrets-1e913cee-4958-4e48-97bc-8107ad681ee4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.142080603s Mar 22 01:18:00.170: INFO: Pod "pod-projected-secrets-1e913cee-4958-4e48-97bc-8107ad681ee4": Phase="Running", Reason="", readiness=true. Elapsed: 4.147706472s Mar 22 01:18:02.175: INFO: Pod "pod-projected-secrets-1e913cee-4958-4e48-97bc-8107ad681ee4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.152105537s STEP: Saw pod success Mar 22 01:18:02.175: INFO: Pod "pod-projected-secrets-1e913cee-4958-4e48-97bc-8107ad681ee4" satisfied condition "Succeeded or Failed" Mar 22 01:18:02.178: INFO: Trying to get logs from node latest-worker pod pod-projected-secrets-1e913cee-4958-4e48-97bc-8107ad681ee4 container projected-secret-volume-test: STEP: delete the pod Mar 22 01:18:02.235: INFO: Waiting for pod pod-projected-secrets-1e913cee-4958-4e48-97bc-8107ad681ee4 to disappear Mar 22 01:18:02.279: INFO: Pod pod-projected-secrets-1e913cee-4958-4e48-97bc-8107ad681ee4 no longer exists [AfterEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 01:18:02.279: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4941" for this suite. • [SLOW TEST:6.366 seconds] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":330,"completed":191,"skipped":3248,"failed":12,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 01:18:02.297: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:187 [It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Mar 22 01:18:02.352: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes Mar 22 01:18:02.416: INFO: The status of Pod pod-logs-websocket-671378cf-3a11-4587-b5f3-78174ca9fb0e is Pending, waiting for it to be Running (with Ready = true) Mar 22 01:18:04.459: INFO: The status of Pod pod-logs-websocket-671378cf-3a11-4587-b5f3-78174ca9fb0e is Pending, waiting for it to be Running (with Ready = true) Mar 22 01:18:06.421: INFO: The status of Pod pod-logs-websocket-671378cf-3a11-4587-b5f3-78174ca9fb0e is Running (Ready = true) [AfterEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 01:18:06.442: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-7358" for this suite. •{"msg":"PASSED [sig-node] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":330,"completed":192,"skipped":3274,"failed":12,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 01:18:06.457: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready Mar 22 01:18:07.015: INFO: new replicaset for deployment "sample-crd-conversion-webhook-deployment" is yet to be created Mar 22 01:18:09.337: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751972687, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751972687, loc:(*time.Location)(0x99208a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751972687, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751972687, loc:(*time.Location)(0x99208a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-b7c59d94\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 22 01:18:12.382: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert from CR v1 to CR v2 [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Mar 22 01:18:12.386: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: v2 custom resource should be converted [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 01:18:13.536: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-8203" for this suite. 
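For context on the conversion exercised above: a CRD conversion webhook receives a ConversionReview, rewrites each bundled object to the requested apiVersion, and answers with a ConversionResponse keyed to the request UID. Below is a minimal sketch of such a handler in Go, assuming the apiextensions-apiserver v1 types; the sample webhook this test deploys does its own v1-to-v2 field mapping, which is not visible in the log, so the conversion body and the serve path/cert paths here are illustrative only.

package main

import (
	"encoding/json"
	"net/http"

	apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime"
)

// convertHandler answers CRD ConversionReview requests. It relabels each
// object with the desired apiVersion; a real webhook would also translate
// any fields that differ between the two versions.
func convertHandler(w http.ResponseWriter, r *http.Request) {
	var review apiextensionsv1.ConversionReview
	if err := json.NewDecoder(r.Body).Decode(&review); err != nil || review.Request == nil {
		http.Error(w, "malformed ConversionReview", http.StatusBadRequest)
		return
	}
	resp := &apiextensionsv1.ConversionResponse{
		UID:    review.Request.UID,
		Result: metav1.Status{Status: metav1.StatusSuccess},
	}
	for _, raw := range review.Request.Objects {
		var obj map[string]interface{}
		if err := json.Unmarshal(raw.Raw, &obj); err != nil {
			resp.Result = metav1.Status{Status: metav1.StatusFailure, Message: err.Error()}
			break
		}
		obj["apiVersion"] = review.Request.DesiredAPIVersion // e.g. a v2 group/version
		converted, _ := json.Marshal(obj)
		resp.ConvertedObjects = append(resp.ConvertedObjects, runtime.RawExtension{Raw: converted})
	}
	review.Response, review.Request = resp, nil
	w.Header().Set("Content-Type", "application/json")
	_ = json.NewEncoder(w).Encode(review)
}

func main() {
	// Conversion webhooks must serve TLS; cert paths are placeholders here.
	http.HandleFunc("/crdconvert", convertHandler)
	_ = http.ListenAndServeTLS(":9443", "tls.crt", "tls.key", nil)
}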
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137 • [SLOW TEST:7.217 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert from CR v1 to CR v2 [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":330,"completed":193,"skipped":3313,"failed":12,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]} SSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 01:18:13.674: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241 [It] should check is all data is printed [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Mar 22 01:18:13.751: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=kubectl-8252 version' Mar 22 01:18:13.920: INFO: stderr: "" Mar 22 01:18:13.920: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"21+\", GitVersion:\"v1.21.0-beta.1\", GitCommit:\"40a411a61af315f955f11ee97397beecf432ff4f\", GitTreeState:\"clean\", BuildDate:\"2021-03-09T09:23:56Z\", GoVersion:\"go1.16\", Compiler:\"gc\", 
Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"21+\", GitVersion:\"v1.21.0-alpha.0\", GitCommit:\"98bc258bf5516b6c60860e06845b899eab29825d\", GitTreeState:\"clean\", BuildDate:\"2021-01-09T21:29:39Z\", GoVersion:\"go1.15.5\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 01:18:13.921: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8252" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance]","total":330,"completed":194,"skipped":3322,"failed":12,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]} SSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 01:18:13.941: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 22 01:18:15.322: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 22 01:18:17.334: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751972695, loc:(*time.Location)(0x99208a0)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751972695, loc:(*time.Location)(0x99208a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751972695, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751972695, loc:(*time.Location)(0x99208a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-8977db\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 22 01:18:20.400: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with pruning [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Mar 22 01:18:20.404: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-1216-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 01:18:21.538: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-2896" for this suite. STEP: Destroying namespace "webhook-2896-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 • [SLOW TEST:7.729 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with pruning [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":330,"completed":195,"skipped":3326,"failed":12,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP 
[LinuxOnly] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]} SSSS ------------------------------ [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 01:18:21.670: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54 [It] should surface a failure condition on a common issue like exceeded quota [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Mar 22 01:18:21.748: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace STEP: Creating rc "condition-test" that asks for more than the allowed pod quota STEP: Checking rc "condition-test" has the desired failure condition set STEP: Scaling down rc "condition-test" to satisfy pod quota Mar 22 01:18:23.836: INFO: Updating replication controller "condition-test" STEP: Checking rc "condition-test" has no failure condition set [AfterEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 01:18:24.906: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-7094" for this suite. 
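The ReplicationController test above relies on the controller surfacing a ReplicaFailure condition when pod creation is rejected by the "condition-test" ResourceQuota. A client can observe that condition directly; here is a minimal sketch using client-go, where the kubeconfig path, namespace, and RC name are taken from this run and everything else is standard API surface.

package main

import (
	"context"
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path as used throughout this run.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Namespace and RC name from the test above.
	rc, err := client.CoreV1().ReplicationControllers("replication-controller-7094").
		Get(context.TODO(), "condition-test", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	for _, cond := range rc.Status.Conditions {
		// When quota blocks pod creation the controller sets
		// Type=ReplicaFailure, Status=True, Reason=FailedCreate.
		if cond.Type == v1.ReplicationControllerReplicaFailure && cond.Status == v1.ConditionTrue {
			fmt.Printf("replica failure: %s: %s\n", cond.Reason, cond.Message)
		}
	}
}

Scaling the RC back within the quota, as the test does, is what lets the controller create all desired replicas and drop the condition again.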
•{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":330,"completed":196,"skipped":3330,"failed":12,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 01:18:24.917: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:38 [It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Mar 22 01:18:25.511: INFO: The status of Pod busybox-host-aliasesf7936cfc-2cf7-456a-a7f7-493a5579f0ac is Pending, waiting for it to be Running (with Ready = true) Mar 22 01:18:27.603: INFO: The status of Pod busybox-host-aliasesf7936cfc-2cf7-456a-a7f7-493a5579f0ac is Pending, waiting for it to be Running (with Ready = true) Mar 22 01:18:29.516: INFO: The status of Pod busybox-host-aliasesf7936cfc-2cf7-456a-a7f7-493a5579f0ac is Running (Ready = true) [AfterEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 01:18:29.526: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-8524" for this suite. 
•{"msg":"PASSED [sig-node] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":330,"completed":197,"skipped":3368,"failed":12,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]} SSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 01:18:29.534: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating service in namespace services-1662 STEP: creating service affinity-clusterip in namespace services-1662 STEP: creating replication controller affinity-clusterip in namespace services-1662 I0322 01:18:29.682349 7 runners.go:190] Created replication controller with name: affinity-clusterip, namespace: services-1662, replica count: 3 I0322 01:18:32.733154 7 runners.go:190] affinity-clusterip Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0322 01:18:35.733232 7 runners.go:190] affinity-clusterip Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Mar 22 01:18:35.784: INFO: Creating new exec pod E0322 01:18:39.831838 7 reflector.go:138] k8s.io/kubernetes/test/e2e/framework/service/jig.go:437: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: the server could not find the requested resource E0322 01:18:41.301480 7 reflector.go:138] k8s.io/kubernetes/test/e2e/framework/service/jig.go:437: 
Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: the server could not find the requested resource E0322 01:18:42.976639 7 reflector.go:138] k8s.io/kubernetes/test/e2e/framework/service/jig.go:437: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: the server could not find the requested resource E0322 01:18:47.804413 7 reflector.go:138] k8s.io/kubernetes/test/e2e/framework/service/jig.go:437: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: the server could not find the requested resource E0322 01:18:55.516286 7 reflector.go:138] k8s.io/kubernetes/test/e2e/framework/service/jig.go:437: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: the server could not find the requested resource E0322 01:19:16.971319 7 reflector.go:138] k8s.io/kubernetes/test/e2e/framework/service/jig.go:437: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: the server could not find the requested resource E0322 01:19:58.068342 7 reflector.go:138] k8s.io/kubernetes/test/e2e/framework/service/jig.go:437: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: the server could not find the requested resource E0322 01:20:39.209333 7 reflector.go:138] k8s.io/kubernetes/test/e2e/framework/service/jig.go:437: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: the server could not find the requested resource Mar 22 01:20:39.830: FAIL: Unexpected error: <*errors.errorString | 0xc001084480>: { s: "no subset of available IP address found for the endpoint affinity-clusterip within timeout 2m0s", } no subset of available IP address found for the endpoint affinity-clusterip within timeout 2m0s occurred Full Stack Trace k8s.io/kubernetes/test/e2e/network.execAffinityTestForNonLBServiceWithOptionalTransition(0xc000f59760, 0x73e8b88, 0xc004a35e40, 0xc000931b80, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2563 +0x625 k8s.io/kubernetes/test/e2e/network.execAffinityTestForNonLBService(...) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2522 k8s.io/kubernetes/test/e2e/network.glob..func24.22() /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1782 +0xa5 k8s.io/kubernetes/test/e2e.RunE2ETests(0xc002c6a180) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c k8s.io/kubernetes/test/e2e.TestE2E(0xc002c6a180) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b testing.tRunner(0xc002c6a180, 0x6d60740) /usr/local/go/src/testing/testing.go:1194 +0xef created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1239 +0x2b3 Mar 22 01:20:39.831: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-clusterip in namespace services-1662, will wait for the garbage collector to delete the pods Mar 22 01:20:39.987: INFO: Deleting ReplicationController affinity-clusterip took: 7.01987ms Mar 22 01:20:40.588: INFO: Terminating ReplicationController affinity-clusterip pods took: 600.842189ms [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 STEP: Collecting events from namespace "services-1662". STEP: Found 23 events. 
Mar 22 01:21:45.162: INFO: At 2021-03-22 01:18:29 +0000 UTC - event for affinity-clusterip: {replication-controller } SuccessfulCreate: Created pod: affinity-clusterip-plkjn Mar 22 01:21:45.162: INFO: At 2021-03-22 01:18:29 +0000 UTC - event for affinity-clusterip: {replication-controller } SuccessfulCreate: Created pod: affinity-clusterip-zcppn Mar 22 01:21:45.162: INFO: At 2021-03-22 01:18:29 +0000 UTC - event for affinity-clusterip: {replication-controller } SuccessfulCreate: Created pod: affinity-clusterip-zvwnh Mar 22 01:21:45.162: INFO: At 2021-03-22 01:18:29 +0000 UTC - event for affinity-clusterip-plkjn: {default-scheduler } Scheduled: Successfully assigned services-1662/affinity-clusterip-plkjn to latest-worker2 Mar 22 01:21:45.162: INFO: At 2021-03-22 01:18:29 +0000 UTC - event for affinity-clusterip-zcppn: {default-scheduler } Scheduled: Successfully assigned services-1662/affinity-clusterip-zcppn to latest-worker Mar 22 01:21:45.162: INFO: At 2021-03-22 01:18:29 +0000 UTC - event for affinity-clusterip-zvwnh: {default-scheduler } Scheduled: Successfully assigned services-1662/affinity-clusterip-zvwnh to latest-worker2 Mar 22 01:21:45.162: INFO: At 2021-03-22 01:18:31 +0000 UTC - event for affinity-clusterip-zvwnh: {kubelet latest-worker2} Pulled: Container image "k8s.gcr.io/e2e-test-images/agnhost:2.28" already present on machine Mar 22 01:21:45.162: INFO: At 2021-03-22 01:18:32 +0000 UTC - event for affinity-clusterip-plkjn: {kubelet latest-worker2} Pulled: Container image "k8s.gcr.io/e2e-test-images/agnhost:2.28" already present on machine Mar 22 01:21:45.162: INFO: At 2021-03-22 01:18:32 +0000 UTC - event for affinity-clusterip-zcppn: {kubelet latest-worker} Pulled: Container image "k8s.gcr.io/e2e-test-images/agnhost:2.28" already present on machine Mar 22 01:21:45.162: INFO: At 2021-03-22 01:18:34 +0000 UTC - event for affinity-clusterip-plkjn: {kubelet latest-worker2} Created: Created container affinity-clusterip Mar 22 01:21:45.162: INFO: At 2021-03-22 01:18:34 +0000 UTC - event for affinity-clusterip-zcppn: {kubelet latest-worker} Created: Created container affinity-clusterip Mar 22 01:21:45.162: INFO: At 2021-03-22 01:18:34 +0000 UTC - event for affinity-clusterip-zcppn: {kubelet latest-worker} Started: Started container affinity-clusterip Mar 22 01:21:45.162: INFO: At 2021-03-22 01:18:34 +0000 UTC - event for affinity-clusterip-zvwnh: {kubelet latest-worker2} Created: Created container affinity-clusterip Mar 22 01:21:45.162: INFO: At 2021-03-22 01:18:34 +0000 UTC - event for affinity-clusterip-zvwnh: {kubelet latest-worker2} Started: Started container affinity-clusterip Mar 22 01:21:45.162: INFO: At 2021-03-22 01:18:35 +0000 UTC - event for affinity-clusterip-plkjn: {kubelet latest-worker2} Started: Started container affinity-clusterip Mar 22 01:21:45.162: INFO: At 2021-03-22 01:18:35 +0000 UTC - event for execpod-affinitypkhdm: {default-scheduler } Scheduled: Successfully assigned services-1662/execpod-affinitypkhdm to latest-worker Mar 22 01:21:45.162: INFO: At 2021-03-22 01:18:37 +0000 UTC - event for execpod-affinitypkhdm: {kubelet latest-worker} Pulled: Container image "k8s.gcr.io/e2e-test-images/agnhost:2.28" already present on machine Mar 22 01:21:45.162: INFO: At 2021-03-22 01:18:38 +0000 UTC - event for execpod-affinitypkhdm: {kubelet latest-worker} Started: Started container agnhost-container Mar 22 01:21:45.162: INFO: At 2021-03-22 01:18:38 +0000 UTC - event for execpod-affinitypkhdm: {kubelet latest-worker} Created: Created container agnhost-container Mar 
22 01:21:45.162: INFO: At 2021-03-22 01:20:39 +0000 UTC - event for execpod-affinitypkhdm: {kubelet latest-worker} Killing: Stopping container agnhost-container Mar 22 01:21:45.162: INFO: At 2021-03-22 01:20:40 +0000 UTC - event for affinity-clusterip-plkjn: {kubelet latest-worker2} Killing: Stopping container affinity-clusterip Mar 22 01:21:45.162: INFO: At 2021-03-22 01:20:40 +0000 UTC - event for affinity-clusterip-zcppn: {kubelet latest-worker} Killing: Stopping container affinity-clusterip Mar 22 01:21:45.162: INFO: At 2021-03-22 01:20:40 +0000 UTC - event for affinity-clusterip-zvwnh: {kubelet latest-worker2} Killing: Stopping container affinity-clusterip Mar 22 01:21:45.164: INFO: POD NODE PHASE GRACE CONDITIONS Mar 22 01:21:45.164: INFO: Mar 22 01:21:45.168: INFO: Logging node info for node latest-control-plane Mar 22 01:21:45.170: INFO: Node Info: &Node{ObjectMeta:{latest-control-plane 490b9532-4cb6-4803-8805-500c50bef538 7010226 0 2021-02-19 10:11:38 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-control-plane kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master:] map[kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2021-02-19 10:11:38 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {kubeadm Update v1 2021-02-19 10:11:39 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{}}}}} {kube-controller-manager Update v1 2021-02-19 10:12:16 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:kind://docker/latest/latest-control-plane,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-03-22 01:19:45 +0000 UTC,LastTransitionTime:2021-02-19 10:11:38 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-03-22 01:19:45 +0000 UTC,LastTransitionTime:2021-02-19 10:11:38 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-03-22 01:19:45 +0000 UTC,LastTransitionTime:2021-02-19 10:11:38 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-03-22 01:19:45 +0000 UTC,LastTransitionTime:2021-02-19 10:12:15 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.14,},NodeAddress{Type:Hostname,Address:latest-control-plane,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:2277e49732264d9b915753a27b5b08cc,SystemUUID:3fcd47a6-9190-448f-a26a-9823c0424f23,BootID:b267d78b-f69b-4338-80e8-3f4944338e5d,KernelVersion:4.15.0-118-generic,OSImage:Ubuntu 20.04 LTS,ContainerRuntimeVersion:containerd://1.3.3-14-g449e9269,KubeletVersion:v1.21.0-alpha.0,KubeProxyVersion:v1.21.0-alpha.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/etcd:3.4.13-0],SizeBytes:254659261,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.21.0-alpha.0],SizeBytes:136866158,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.21.0-alpha.0],SizeBytes:95511852,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.21.0-alpha.0],SizeBytes:88147262,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.21.0-alpha.0],SizeBytes:66088748,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:a3f7549ea04c419276e8b84e90a515bbce5bc8a057be2ed974ec45492eca346e k8s.gcr.io/e2e-test-images/agnhost:2.26],SizeBytes:49216572,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:4f3eec1e602e1e0b0e5858c8f3d399d0ef536799ec1f7666e025f211001cb706 
k8s.gcr.io/e2e-test-images/agnhost:2.28],SizeBytes:49210832,},ContainerImage{Names:[k8s.gcr.io/coredns:1.7.0],SizeBytes:45355487,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 docker.io/aquasec/kube-bench:0.3.1],SizeBytes:8042926,},ContainerImage{Names:[k8s.gcr.io/pause:3.2],SizeBytes:685724,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:301268,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 22 01:21:45.171: INFO: Logging kubelet events for node latest-control-plane Mar 22 01:21:45.173: INFO: Logging pods the kubelet thinks are on node latest-control-plane Mar 22 01:21:45.195: INFO: coredns-74ff55c5b-tqd5x started at 2021-03-22 00:01:48 +0000 UTC (0+1 container statuses recorded) Mar 22 01:21:45.195: INFO: Container coredns ready: true, restart count 0 Mar 22 01:21:45.195: INFO: etcd-latest-control-plane started at 2021-02-19 10:11:46 +0000 UTC (0+1 container statuses recorded) Mar 22 01:21:45.195: INFO: Container etcd ready: true, restart count 0 Mar 22 01:21:45.195: INFO: kube-proxy-6jdsd started at 2021-02-19 10:11:54 +0000 UTC (0+1 container statuses recorded) Mar 22 01:21:45.195: INFO: Container kube-proxy ready: true, restart count 0 Mar 22 01:21:45.195: INFO: kube-apiserver-latest-control-plane started at 2021-02-19 10:11:46 +0000 UTC (0+1 container statuses recorded) Mar 22 01:21:45.195: INFO: Container kube-apiserver ready: true, restart count 0 Mar 22 01:21:45.195: INFO: kindnet-94zqp started at 2021-02-19 10:11:54 +0000 UTC (0+1 container statuses recorded) Mar 22 01:21:45.195: INFO: Container kindnet-cni ready: true, restart count 0 Mar 22 01:21:45.195: INFO: coredns-74ff55c5b-9rxsk started at 2021-03-22 00:01:47 +0000 UTC (0+1 container statuses recorded) Mar 22 01:21:45.195: INFO: Container coredns ready: true, restart count 0 Mar 22 01:21:45.195: INFO: local-path-provisioner-8b46957d4-54gls started at 2021-02-19 10:12:21 +0000 UTC (0+1 container statuses recorded) Mar 22 01:21:45.195: INFO: Container local-path-provisioner ready: true, restart count 0 Mar 22 01:21:45.195: INFO: kube-controller-manager-latest-control-plane started at 2021-02-19 10:11:46 +0000 UTC (0+1 container statuses recorded) Mar 22 01:21:45.195: INFO: Container kube-controller-manager ready: true, restart count 0 Mar 22 01:21:45.195: INFO: kube-scheduler-latest-control-plane started at 2021-02-19 10:11:46 +0000 UTC (0+1 container statuses recorded) Mar 22 01:21:45.195: INFO: Container kube-scheduler ready: true, restart count 0 W0322 01:21:45.201381 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
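The per-node pod listing in the diagnostics above can be reproduced by any client with a field selector on spec.nodeName. A short sketch follows; the node name comes from this log and the kubeconfig path is the one used throughout the run, while the rest is standard client-go.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/fields"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// All pods scheduled to latest-control-plane, across namespaces.
	sel := fields.OneTermEqualSelector("spec.nodeName", "latest-control-plane").String()
	pods, err := client.CoreV1().Pods(metav1.NamespaceAll).
		List(context.TODO(), metav1.ListOptions{FieldSelector: sel})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		fmt.Printf("%s/%s %s\n", p.Namespace, p.Name, p.Status.Phase)
	}
}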
Mar 22 01:21:45.291: INFO: Latency metrics for node latest-control-plane Mar 22 01:21:45.291: INFO: Logging node info for node latest-worker Mar 22 01:21:45.295: INFO: Node Info: &Node{ObjectMeta:{latest-worker 52cd6d4b-d53f-435d-801a-04c2822dec44 7010265 0 2021-02-19 10:12:05 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux io.kubernetes.storage.mock/node:some-mock-node kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-worker kubernetes.io/os:linux] map[csi.volume.kubernetes.io/nodeid:{"csi-mock-csi-mock-volumes-11":"csi-mock-csi-mock-volumes-11","csi-mock-csi-mock-volumes-1170":"csi-mock-csi-mock-volumes-1170","csi-mock-csi-mock-volumes-1204":"csi-mock-csi-mock-volumes-1204","csi-mock-csi-mock-volumes-1243":"csi-mock-csi-mock-volumes-1243","csi-mock-csi-mock-volumes-135":"csi-mock-csi-mock-volumes-135","csi-mock-csi-mock-volumes-1517":"csi-mock-csi-mock-volumes-1517","csi-mock-csi-mock-volumes-1561":"csi-mock-csi-mock-volumes-1561","csi-mock-csi-mock-volumes-1595":"csi-mock-csi-mock-volumes-1595","csi-mock-csi-mock-volumes-1714":"csi-mock-csi-mock-volumes-1714","csi-mock-csi-mock-volumes-1732":"csi-mock-csi-mock-volumes-1732","csi-mock-csi-mock-volumes-1876":"csi-mock-csi-mock-volumes-1876","csi-mock-csi-mock-volumes-1945":"csi-mock-csi-mock-volumes-1945","csi-mock-csi-mock-volumes-2041":"csi-mock-csi-mock-volumes-2041","csi-mock-csi-mock-volumes-2057":"csi-mock-csi-mock-volumes-2057","csi-mock-csi-mock-volumes-2208":"csi-mock-csi-mock-volumes-2208","csi-mock-csi-mock-volumes-2302":"csi-mock-csi-mock-volumes-2302","csi-mock-csi-mock-volumes-2459":"csi-mock-csi-mock-volumes-2459","csi-mock-csi-mock-volumes-2511":"csi-mock-csi-mock-volumes-2511","csi-mock-csi-mock-volumes-2706":"csi-mock-csi-mock-volumes-2706","csi-mock-csi-mock-volumes-2710":"csi-mock-csi-mock-volumes-2710","csi-mock-csi-mock-volumes-273":"csi-mock-csi-mock-volumes-273","csi-mock-csi-mock-volumes-2782":"csi-mock-csi-mock-volumes-2782","csi-mock-csi-mock-volumes-2797":"csi-mock-csi-mock-volumes-2797","csi-mock-csi-mock-volumes-2805":"csi-mock-csi-mock-volumes-2805","csi-mock-csi-mock-volumes-2843":"csi-mock-csi-mock-volumes-2843","csi-mock-csi-mock-volumes-3075":"csi-mock-csi-mock-volumes-3075","csi-mock-csi-mock-volumes-3107":"csi-mock-csi-mock-volumes-3107","csi-mock-csi-mock-volumes-3115":"csi-mock-csi-mock-volumes-3115","csi-mock-csi-mock-volumes-3164":"csi-mock-csi-mock-volumes-3164","csi-mock-csi-mock-volumes-3202":"csi-mock-csi-mock-volumes-3202","csi-mock-csi-mock-volumes-3235":"csi-mock-csi-mock-volumes-3235","csi-mock-csi-mock-volumes-3298":"csi-mock-csi-mock-volumes-3298","csi-mock-csi-mock-volumes-3313":"csi-mock-csi-mock-volumes-3313","csi-mock-csi-mock-volumes-3364":"csi-mock-csi-mock-volumes-3364","csi-mock-csi-mock-volumes-3488":"csi-mock-csi-mock-volumes-3488","csi-mock-csi-mock-volumes-3490":"csi-mock-csi-mock-volumes-3490","csi-mock-csi-mock-volumes-3546":"csi-mock-csi-mock-volumes-3546","csi-mock-csi-mock-volumes-3568":"csi-mock-csi-mock-volumes-3568","csi-mock-csi-mock-volumes-3595":"csi-mock-csi-mock-volumes-3595","csi-mock-csi-mock-volumes-3615":"csi-mock-csi-mock-volumes-3615","csi-mock-csi-mock-volumes-3622":"csi-mock-csi-mock-volumes-3622","csi-mock-csi-mock-volumes-3660":"csi-mock-csi-mock-volumes-3660","csi-mock-csi-mock-volumes-3738":"csi-mock-csi-mock-volumes-3738","csi-mock-csi-mock-volumes-380":"csi-mock-csi-mock-volumes-380","csi-mock-csi-mock-volumes-3905":"csi-mock-csi-mock-volumes-3905","csi-mock-csi-mock-volumes-3983":"csi-mock-csi-mock-volumes-3983","csi-mock
-csi-mock-volumes-4658":"csi-mock-csi-mock-volumes-4658","csi-mock-csi-mock-volumes-4689":"csi-mock-csi-mock-volumes-4689","csi-mock-csi-mock-volumes-4839":"csi-mock-csi-mock-volumes-4839","csi-mock-csi-mock-volumes-4871":"csi-mock-csi-mock-volumes-4871","csi-mock-csi-mock-volumes-4885":"csi-mock-csi-mock-volumes-4885","csi-mock-csi-mock-volumes-4888":"csi-mock-csi-mock-volumes-4888","csi-mock-csi-mock-volumes-5028":"csi-mock-csi-mock-volumes-5028","csi-mock-csi-mock-volumes-5118":"csi-mock-csi-mock-volumes-5118","csi-mock-csi-mock-volumes-5120":"csi-mock-csi-mock-volumes-5120","csi-mock-csi-mock-volumes-5160":"csi-mock-csi-mock-volumes-5160","csi-mock-csi-mock-volumes-5164":"csi-mock-csi-mock-volumes-5164","csi-mock-csi-mock-volumes-5225":"csi-mock-csi-mock-volumes-5225","csi-mock-csi-mock-volumes-526":"csi-mock-csi-mock-volumes-526","csi-mock-csi-mock-volumes-5365":"csi-mock-csi-mock-volumes-5365","csi-mock-csi-mock-volumes-5399":"csi-mock-csi-mock-volumes-5399","csi-mock-csi-mock-volumes-5443":"csi-mock-csi-mock-volumes-5443","csi-mock-csi-mock-volumes-5520":"csi-mock-csi-mock-volumes-5520","csi-mock-csi-mock-volumes-5561":"csi-mock-csi-mock-volumes-5561","csi-mock-csi-mock-volumes-5608":"csi-mock-csi-mock-volumes-5608","csi-mock-csi-mock-volumes-5652":"csi-mock-csi-mock-volumes-5652","csi-mock-csi-mock-volumes-5672":"csi-mock-csi-mock-volumes-5672","csi-mock-csi-mock-volumes-569":"csi-mock-csi-mock-volumes-569","csi-mock-csi-mock-volumes-5759":"csi-mock-csi-mock-volumes-5759","csi-mock-csi-mock-volumes-5910":"csi-mock-csi-mock-volumes-5910","csi-mock-csi-mock-volumes-6046":"csi-mock-csi-mock-volumes-6046","csi-mock-csi-mock-volumes-6099":"csi-mock-csi-mock-volumes-6099","csi-mock-csi-mock-volumes-621":"csi-mock-csi-mock-volumes-621","csi-mock-csi-mock-volumes-6347":"csi-mock-csi-mock-volumes-6347","csi-mock-csi-mock-volumes-6447":"csi-mock-csi-mock-volumes-6447","csi-mock-csi-mock-volumes-6752":"csi-mock-csi-mock-volumes-6752","csi-mock-csi-mock-volumes-6763":"csi-mock-csi-mock-volumes-6763","csi-mock-csi-mock-volumes-7184":"csi-mock-csi-mock-volumes-7184","csi-mock-csi-mock-volumes-7244":"csi-mock-csi-mock-volumes-7244","csi-mock-csi-mock-volumes-7259":"csi-mock-csi-mock-volumes-7259","csi-mock-csi-mock-volumes-726":"csi-mock-csi-mock-volumes-726","csi-mock-csi-mock-volumes-7302":"csi-mock-csi-mock-volumes-7302","csi-mock-csi-mock-volumes-7346":"csi-mock-csi-mock-volumes-7346","csi-mock-csi-mock-volumes-7378":"csi-mock-csi-mock-volumes-7378","csi-mock-csi-mock-volumes-7385":"csi-mock-csi-mock-volumes-7385","csi-mock-csi-mock-volumes-746":"csi-mock-csi-mock-volumes-746","csi-mock-csi-mock-volumes-7574":"csi-mock-csi-mock-volumes-7574","csi-mock-csi-mock-volumes-7712":"csi-mock-csi-mock-volumes-7712","csi-mock-csi-mock-volumes-7749":"csi-mock-csi-mock-volumes-7749","csi-mock-csi-mock-volumes-7820":"csi-mock-csi-mock-volumes-7820","csi-mock-csi-mock-volumes-7946":"csi-mock-csi-mock-volumes-7946","csi-mock-csi-mock-volumes-8159":"csi-mock-csi-mock-volumes-8159","csi-mock-csi-mock-volumes-8382":"csi-mock-csi-mock-volumes-8382","csi-mock-csi-mock-volumes-8458":"csi-mock-csi-mock-volumes-8458","csi-mock-csi-mock-volumes-8569":"csi-mock-csi-mock-volumes-8569","csi-mock-csi-mock-volumes-8626":"csi-mock-csi-mock-volumes-8626","csi-mock-csi-mock-volumes-8627":"csi-mock-csi-mock-volumes-8627","csi-mock-csi-mock-volumes-8774":"csi-mock-csi-mock-volumes-8774","csi-mock-csi-mock-volumes-8777":"csi-mock-csi-mock-volumes-8777","csi-mock-csi-mock-volumes-8880":"csi-mock-csi-mock-volumes-8880","csi-mock-
csi-mock-volumes-923":"csi-mock-csi-mock-volumes-923","csi-mock-csi-mock-volumes-9279":"csi-mock-csi-mock-volumes-9279","csi-mock-csi-mock-volumes-9372":"csi-mock-csi-mock-volumes-9372","csi-mock-csi-mock-volumes-9687":"csi-mock-csi-mock-volumes-9687","csi-mock-csi-mock-volumes-969":"csi-mock-csi-mock-volumes-969","csi-mock-csi-mock-volumes-983":"csi-mock-csi-mock-volumes-983","csi-mock-csi-mock-volumes-9858":"csi-mock-csi-mock-volumes-9858","csi-mock-csi-mock-volumes-9983":"csi-mock-csi-mock-volumes-9983","csi-mock-csi-mock-volumes-9992":"csi-mock-csi-mock-volumes-9992","csi-mock-csi-mock-volumes-9995":"csi-mock-csi-mock-volumes-9995"} kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-02-19 10:12:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {kube-controller-manager Update v1 2021-03-22 00:39:31 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}}}}} {kubelet Update v1 2021-03-22 00:39:59 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:csi.volume.kubernetes.io/nodeid":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:io.kubernetes.storage.mock/node":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:kind://docker/latest/latest-worker,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 
DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-03-22 01:20:05 +0000 UTC,LastTransitionTime:2021-02-19 10:12:05 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-03-22 01:20:05 +0000 UTC,LastTransitionTime:2021-02-19 10:12:05 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-03-22 01:20:05 +0000 UTC,LastTransitionTime:2021-02-19 10:12:05 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-03-22 01:20:05 +0000 UTC,LastTransitionTime:2021-02-19 10:12:25 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.9,},NodeAddress{Type:Hostname,Address:latest-worker,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:9501a3ffa7bf40adaca8f4cb3d1dcc93,SystemUUID:6921bb21-67c9-42ea-b514-81b1daac5968,BootID:b267d78b-f69b-4338-80e8-3f4944338e5d,KernelVersion:4.15.0-118-generic,OSImage:Ubuntu 20.04 LTS,ContainerRuntimeVersion:containerd://1.3.3-14-g449e9269,KubeletVersion:v1.21.0-alpha.0,KubeProxyVersion:v1.21.0-alpha.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/ollivier/clearwater-cassandra@sha256:7460a5e29c91e16bd36534b5284b27832e1c6e0f7c22a4bb79eda79942d250e1 docker.io/ollivier/clearwater-cassandra:hunter],SizeBytes:386500834,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead-prov@sha256:ef3803b4336ea217f9a467dac7f5c4bc3d2cd0f3cbbcc6c101419506fc3a6fa4 docker.io/ollivier/clearwater-homestead-prov:hunter],SizeBytes:360721934,},ContainerImage{Names:[docker.io/ollivier/clearwater-ellis@sha256:abcaf4a9ec74aa746f3b3949f05ee359349fce2b0b3b2deedf67164719cff5dc docker.io/ollivier/clearwater-ellis:hunter],SizeBytes:351519591,},ContainerImage{Names:[docker.io/ollivier/clearwater-homer@sha256:f8738d756137990120f6aafca7e0bc57186ab591a4b51508ba9ddf30688f4de1 docker.io/ollivier/clearwater-homer:hunter],SizeBytes:344304298,},ContainerImage{Names:[docker.io/ollivier/clearwater-astaire@sha256:5bc730435563dc3e31f3a3b7bb58b5899c453929b624052622b3a9a1c2fd04d8 docker.io/ollivier/clearwater-astaire:hunter],SizeBytes:327310970,},ContainerImage{Names:[docker.io/ollivier/clearwater-bono@sha256:0da41ddbf72c158f8e617015450587634cdf6856696005601b2108df92a27254 docker.io/ollivier/clearwater-bono:hunter],SizeBytes:303708624,},ContainerImage{Names:[docker.io/ollivier/clearwater-sprout@sha256:fd91f90bdfbfac30288ca296962e36537f8fa311d34711904e027098227d9f49 docker.io/ollivier/clearwater-sprout:hunter],SizeBytes:298627136,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead@sha256:8467d6c9caefa6a402eba927ee31f8179b2dc3386ac4768dd04f4bd630a1b9e9 docker.io/ollivier/clearwater-homestead:hunter],SizeBytes:295167572,},ContainerImage{Names:[docker.io/ollivier/clearwater-ralf@sha256:7054a741dbd4ea45fa4efd8138c07e21d35fbed627971559de5df2d03883e94f 
docker.io/ollivier/clearwater-ralf:hunter],SizeBytes:287441316,},ContainerImage{Names:[docker.io/ollivier/clearwater-chronos@sha256:1d5f9b5385db632e3076463fec5a4db5d4a15ef21ac5a112cb42fb0551ffa36d docker.io/ollivier/clearwater-chronos:hunter],SizeBytes:285504787,},ContainerImage{Names:[k8s.gcr.io/etcd:3.4.13-0],SizeBytes:254659261,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.21.0-alpha.0],SizeBytes:136866158,},ContainerImage{Names:[docker.io/aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 docker.io/aquasec/kube-hunter:0.3.1],SizeBytes:124684106,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:112029652,},ContainerImage{Names:[docker.io/gluster/glusterdynamic-provisioner@sha256:90067cb05a7d217651e84576935934fd9dcff8e6bcdcbaa416bbf36fcd09dbd1 docker.io/gluster/glusterdynamic-provisioner:v1.0],SizeBytes:111200078,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner@sha256:8bc20b52ce066dd4ea3d9eaac40c04ea8a77f47c33789676580cf4c7c9ea3c3d k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner:v1.0],SizeBytes:111199402,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.21.0-alpha.0],SizeBytes:95511852,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.21.0-alpha.0],SizeBytes:88147262,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:85425365,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.21.0-alpha.0],SizeBytes:66088748,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:49230179,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:a3f7549ea04c419276e8b84e90a515bbce5bc8a057be2ed974ec45492eca346e k8s.gcr.io/e2e-test-images/agnhost:2.26],SizeBytes:49216572,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:4f3eec1e602e1e0b0e5858c8f3d399d0ef536799ec1f7666e025f211001cb706 k8s.gcr.io/e2e-test-images/agnhost:2.28],SizeBytes:49210832,},ContainerImage{Names:[docker.io/bitnami/kubectl@sha256:471a2f06834db208d0e4bef5856550f911357c41b72d0baefa591b3714839067 docker.io/bitnami/kubectl:latest],SizeBytes:48898314,},ContainerImage{Names:[k8s.gcr.io/coredns:1.7.0],SizeBytes:45355487,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:41902332,},ContainerImage{Names:[docker.io/library/httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a docker.io/library/httpd:2.4.39-alpine],SizeBytes:41901429,},ContainerImage{Names:[docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 docker.io/library/httpd:2.4.38-alpine],SizeBytes:40765017,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 
k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:40765006,},ContainerImage{Names:[docker.io/ollivier/clearwater-live-test@sha256:1ec1c46f531b5466e62e2a3f1e65e97336e8ea0ca50f11a4305775d6aeac9a58 docker.io/ollivier/clearwater-live-test:hunter],SizeBytes:39060692,},ContainerImage{Names:[k8s.gcr.io/build-image/debian-iptables@sha256:abe8cef9e116f2d5ec1175c386e33841ff3386779138b425af876384b0fd7ccb k8s.gcr.io/build-image/debian-iptables:buster-v1.5.0],SizeBytes:38088315,},ContainerImage{Names:[k8s.gcr.io/build-image/debian-iptables@sha256:4c9410a4ee555dcb0e8b7bd6fc77c65ac400f7c5bd4555df8187630efaea6ea4 k8s.gcr.io/build-image/debian-iptables:buster-v1.3.0],SizeBytes:37934917,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:ff02aacd9766d597883fabafc7ad604c719a57611db1bcc1564c69a45b000a55 gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.17],SizeBytes:25311280,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:24757245,},ContainerImage{Names:[docker.io/pingcap/chaos-daemon@sha256:2138de599ca94104101fcd8f3bd04401f8c88f2bc721d0f77d55a81b85f4d92f docker.io/pingcap/chaos-daemon:v0.8.0],SizeBytes:21885344,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:20c828075d1e36f679d6a91e905b0927141eef5e15be0c9a1ca4a6a0ed9313d2 k8s.gcr.io/sig-storage/csi-provisioner:v2.1.0],SizeBytes:21205045,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:78e3393f5fd5ff6c1e5dada2478cfa456fb7164929e573cf9a87bf6532730679 k8s.gcr.io/sig-storage/csi-provisioner:v1.6.0],SizeBytes:19408504,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:2ffa647e8107cfd39e5f464e738dce014c9f5e51b108da36c3ab621048d0bbab k8s.gcr.io/sig-storage/csi-attacher:v2.2.0],SizeBytes:18451536,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:6c6a0332693a7c456378f6abd2bb40611826c1e1a733cadbdae2daab3125b71c k8s.gcr.io/sig-storage/csi-resizer:v0.5.0],SizeBytes:18412631,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:4051e85640c22f8e00c34dbd273576fc9e1e2829992656588062be9c0f69b04b k8s.gcr.io/e2e-test-images/nonroot:1.1],SizeBytes:17748448,},ContainerImage{Names:[docker.io/pingcap/chaos-mesh@sha256:b90c04665e0275602e52df37d4b6fdd4621df7d4e4c823fc63356559f67dea72 docker.io/pingcap/chaos-mesh:v0.8.0],SizeBytes:13863887,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e docker.io/coredns/coredns:latest docker.io/coredns/coredns:1.8.0],SizeBytes:12945155,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:4a6e0769130686518325b21b0c1d0688b54e7c79244d48e1b15634e98e40c6ef docker.io/coredns/coredns:1.7.1],SizeBytes:12935701,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 
docker.io/aquasec/kube-bench:0.3.1],SizeBytes:8042926,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:9622c6a6dac7499a055a382930f4de82905a3c5735c0753f7094115c9c871309 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v1.3.0],SizeBytes:7717137,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 22 01:21:45.296: INFO: Logging kubelet events for node latest-worker Mar 22 01:21:45.299: INFO: Logging pods the kubelet thinks are on node latest-worker Mar 22 01:21:45.318: INFO: kube-proxy-5wvjm started at 2021-02-19 10:12:05 +0000 UTC (0+1 container statuses recorded) Mar 22 01:21:45.318: INFO: Container kube-proxy ready: true, restart count 0 Mar 22 01:21:45.318: INFO: kindnet-l4mzm started at 2021-03-22 00:02:51 +0000 UTC (0+1 container statuses recorded) Mar 22 01:21:45.318: INFO: Container kindnet-cni ready: true, restart count 0 Mar 22 01:21:45.318: INFO: chaos-daemon-vb9xf started at 2021-03-22 00:02:51 +0000 UTC (0+1 container statuses recorded) Mar 22 01:21:45.318: INFO: Container chaos-daemon ready: true, restart count 0 Mar 22 01:21:45.318: INFO: chaos-controller-manager-69c479c674-rdmrr started at 2021-03-22 00:02:52 +0000 UTC (0+1 container statuses recorded) Mar 22 01:21:45.318: INFO: Container chaos-mesh ready: true, restart count 0 W0322 01:21:45.325171 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Mar 22 01:21:45.579: INFO: Latency metrics for node latest-worker Mar 22 01:21:45.579: INFO: Logging node info for node latest-worker2 Mar 22 01:21:45.595: INFO: Node Info: &Node{ObjectMeta:{latest-worker2 7d2a1377-0c6f-45fb-899e-6c307ecb1803 7010164 0 2021-02-19 10:12:05 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux io.kubernetes.storage.mock/node:some-mock-node kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-worker2 kubernetes.io/os:linux]
map[csi.volume.kubernetes.io/nodeid:{"csi-mock-csi-mock-volumes-1005":"csi-mock-csi-mock-volumes-1005","csi-mock-csi-mock-volumes-1125":"csi-mock-csi-mock-volumes-1125","csi-mock-csi-mock-volumes-1158":"csi-mock-csi-mock-volumes-1158","csi-mock-csi-mock-volumes-1167":"csi-mock-csi-mock-volumes-1167","csi-mock-csi-mock-volumes-117":"csi-mock-csi-mock-volumes-117","csi-mock-csi-mock-volumes-123":"csi-mock-csi-mock-volumes-123","csi-mock-csi-mock-volumes-1248":"csi-mock-csi-mock-volumes-1248","csi-mock-csi-mock-volumes-132":"csi-mock-csi-mock-volumes-132","csi-mock-csi-mock-volumes-1462":"csi-mock-csi-mock-volumes-1462","csi-mock-csi-mock-volumes-1508":"csi-mock-csi-mock-volumes-1508","csi-mock-csi-mock-volumes-1620":"csi-mock-csi-mock-volumes-1620","csi-mock-csi-mock-volumes-1694":"csi-mock-csi-mock-volumes-1694","csi-mock-csi-mock-volumes-1823":"csi-mock-csi-mock-volumes-1823","csi-mock-csi-mock-volumes-1829":"csi-mock-csi-mock-volumes-1829","csi-mock-csi-mock-volumes-1852":"csi-mock-csi-mock-volumes-1852","csi-mock-csi-mock-volumes-1856":"csi-mock-csi-mock-volumes-1856","csi-mock-csi-mock-volumes-194":"csi-mock-csi-mock-volumes-194","csi-mock-csi-mock-volumes-1943":"csi-mock-csi-mock-volumes-1943","csi-mock-csi-mock-volumes-2103":"csi-mock-csi-mock-volumes-2103","csi-mock-csi-mock-volumes-2109":"csi-mock-csi-mock-volumes-2109","csi-mock-csi-mock-volumes-2134":"csi-mock-csi-mock-volumes-2134","csi-mock-csi-mock-volumes-2212":"csi-mock-csi-mock-volumes-2212","csi-mock-csi-mock-volumes-2302":"csi-mock-csi-mock-volumes-2302","csi-mock-csi-mock-volumes-2308":"csi-mock-csi-mock-volumes-2308","csi-mock-csi-mock-volumes-2407":"csi-mock-csi-mock-volumes-2407","csi-mock-csi-mock-volumes-2458":"csi-mock-csi-mock-volumes-2458","csi-mock-csi-mock-volumes-2474":"csi-mock-csi-mock-volumes-2474","csi-mock-csi-mock-volumes-254":"csi-mock-csi-mock-volumes-254","csi-mock-csi-mock-volumes-2575":"csi-mock-csi-mock-volumes-2575","csi-mock-csi-mock-volumes-2622":"csi-mock-csi-mock-volumes-2622","csi-mock-csi-mock-volumes-2651":"csi-mock-csi-mock-volumes-2651","csi-mock-csi-mock-volumes-2668":"csi-mock-csi-mock-volumes-2668","csi-mock-csi-mock-volumes-2781":"csi-mock-csi-mock-volumes-2781","csi-mock-csi-mock-volumes-2791":"csi-mock-csi-mock-volumes-2791","csi-mock-csi-mock-volumes-2823":"csi-mock-csi-mock-volumes-2823","csi-mock-csi-mock-volumes-2847":"csi-mock-csi-mock-volumes-2847","csi-mock-csi-mock-volumes-295":"csi-mock-csi-mock-volumes-295","csi-mock-csi-mock-volumes-3129":"csi-mock-csi-mock-volumes-3129","csi-mock-csi-mock-volumes-3190":"csi-mock-csi-mock-volumes-3190","csi-mock-csi-mock-volumes-3428":"csi-mock-csi-mock-volumes-3428","csi-mock-csi-mock-volumes-3490":"csi-mock-csi-mock-volumes-3490","csi-mock-csi-mock-volumes-3491":"csi-mock-csi-mock-volumes-3491","csi-mock-csi-mock-volumes-3532":"csi-mock-csi-mock-volumes-3532","csi-mock-csi-mock-volumes-3609":"csi-mock-csi-mock-volumes-3609","csi-mock-csi-mock-volumes-368":"csi-mock-csi-mock-volumes-368","csi-mock-csi-mock-volumes-3698":"csi-mock-csi-mock-volumes-3698","csi-mock-csi-mock-volumes-3714":"csi-mock-csi-mock-volumes-3714","csi-mock-csi-mock-volumes-3720":"csi-mock-csi-mock-volumes-3720","csi-mock-csi-mock-volumes-3722":"csi-mock-csi-mock-volumes-3722","csi-mock-csi-mock-volumes-3723":"csi-mock-csi-mock-volumes-3723","csi-mock-csi-mock-volumes-3783":"csi-mock-csi-mock-volumes-3783","csi-mock-csi-mock-volumes-3887":"csi-mock-csi-mock-volumes-3887","csi-mock-csi-mock-volumes-3973":"csi-mock-csi-mock-volumes-3973","csi-mock-csi-mock-volumes-4074":"
csi-mock-csi-mock-volumes-4074","csi-mock-csi-mock-volumes-4164":"csi-mock-csi-mock-volumes-4164","csi-mock-csi-mock-volumes-4181":"csi-mock-csi-mock-volumes-4181","csi-mock-csi-mock-volumes-4442":"csi-mock-csi-mock-volumes-4442","csi-mock-csi-mock-volumes-4483":"csi-mock-csi-mock-volumes-4483","csi-mock-csi-mock-volumes-4549":"csi-mock-csi-mock-volumes-4549","csi-mock-csi-mock-volumes-4707":"csi-mock-csi-mock-volumes-4707","csi-mock-csi-mock-volumes-4906":"csi-mock-csi-mock-volumes-4906","csi-mock-csi-mock-volumes-4977":"csi-mock-csi-mock-volumes-4977","csi-mock-csi-mock-volumes-5116":"csi-mock-csi-mock-volumes-5116","csi-mock-csi-mock-volumes-5166":"csi-mock-csi-mock-volumes-5166","csi-mock-csi-mock-volumes-5233":"csi-mock-csi-mock-volumes-5233","csi-mock-csi-mock-volumes-5344":"csi-mock-csi-mock-volumes-5344","csi-mock-csi-mock-volumes-5406":"csi-mock-csi-mock-volumes-5406","csi-mock-csi-mock-volumes-5433":"csi-mock-csi-mock-volumes-5433","csi-mock-csi-mock-volumes-5483":"csi-mock-csi-mock-volumes-5483","csi-mock-csi-mock-volumes-5486":"csi-mock-csi-mock-volumes-5486","csi-mock-csi-mock-volumes-5520":"csi-mock-csi-mock-volumes-5520","csi-mock-csi-mock-volumes-5540":"csi-mock-csi-mock-volumes-5540","csi-mock-csi-mock-volumes-5592":"csi-mock-csi-mock-volumes-5592","csi-mock-csi-mock-volumes-5741":"csi-mock-csi-mock-volumes-5741","csi-mock-csi-mock-volumes-5753":"csi-mock-csi-mock-volumes-5753","csi-mock-csi-mock-volumes-5790":"csi-mock-csi-mock-volumes-5790","csi-mock-csi-mock-volumes-5820":"csi-mock-csi-mock-volumes-5820","csi-mock-csi-mock-volumes-5830":"csi-mock-csi-mock-volumes-5830","csi-mock-csi-mock-volumes-5880":"csi-mock-csi-mock-volumes-5880","csi-mock-csi-mock-volumes-5886":"csi-mock-csi-mock-volumes-5886","csi-mock-csi-mock-volumes-5899":"csi-mock-csi-mock-volumes-5899","csi-mock-csi-mock-volumes-5925":"csi-mock-csi-mock-volumes-5925","csi-mock-csi-mock-volumes-5928":"csi-mock-csi-mock-volumes-5928","csi-mock-csi-mock-volumes-6098":"csi-mock-csi-mock-volumes-6098","csi-mock-csi-mock-volumes-6154":"csi-mock-csi-mock-volumes-6154","csi-mock-csi-mock-volumes-6193":"csi-mock-csi-mock-volumes-6193","csi-mock-csi-mock-volumes-6237":"csi-mock-csi-mock-volumes-6237","csi-mock-csi-mock-volumes-6393":"csi-mock-csi-mock-volumes-6393","csi-mock-csi-mock-volumes-6394":"csi-mock-csi-mock-volumes-6394","csi-mock-csi-mock-volumes-6468":"csi-mock-csi-mock-volumes-6468","csi-mock-csi-mock-volumes-6508":"csi-mock-csi-mock-volumes-6508","csi-mock-csi-mock-volumes-6516":"csi-mock-csi-mock-volumes-6516","csi-mock-csi-mock-volumes-6520":"csi-mock-csi-mock-volumes-6520","csi-mock-csi-mock-volumes-6574":"csi-mock-csi-mock-volumes-6574","csi-mock-csi-mock-volumes-6663":"csi-mock-csi-mock-volumes-6663","csi-mock-csi-mock-volumes-6715":"csi-mock-csi-mock-volumes-6715","csi-mock-csi-mock-volumes-6754":"csi-mock-csi-mock-volumes-6754","csi-mock-csi-mock-volumes-6804":"csi-mock-csi-mock-volumes-6804","csi-mock-csi-mock-volumes-6918":"csi-mock-csi-mock-volumes-6918","csi-mock-csi-mock-volumes-6925":"csi-mock-csi-mock-volumes-6925","csi-mock-csi-mock-volumes-7092":"csi-mock-csi-mock-volumes-7092","csi-mock-csi-mock-volumes-7139":"csi-mock-csi-mock-volumes-7139","csi-mock-csi-mock-volumes-7270":"csi-mock-csi-mock-volumes-7270","csi-mock-csi-mock-volumes-7273":"csi-mock-csi-mock-volumes-7273","csi-mock-csi-mock-volumes-7442":"csi-mock-csi-mock-volumes-7442","csi-mock-csi-mock-volumes-7448":"csi-mock-csi-mock-volumes-7448","csi-mock-csi-mock-volumes-7543":"csi-mock-csi-mock-volumes-7543","csi-mock-csi-mock-volume
s-7597":"csi-mock-csi-mock-volumes-7597","csi-mock-csi-mock-volumes-7608":"csi-mock-csi-mock-volumes-7608","csi-mock-csi-mock-volumes-7642":"csi-mock-csi-mock-volumes-7642","csi-mock-csi-mock-volumes-7659":"csi-mock-csi-mock-volumes-7659","csi-mock-csi-mock-volumes-7725":"csi-mock-csi-mock-volumes-7725","csi-mock-csi-mock-volumes-7760":"csi-mock-csi-mock-volumes-7760","csi-mock-csi-mock-volumes-778":"csi-mock-csi-mock-volumes-778","csi-mock-csi-mock-volumes-7811":"csi-mock-csi-mock-volumes-7811","csi-mock-csi-mock-volumes-7819":"csi-mock-csi-mock-volumes-7819","csi-mock-csi-mock-volumes-791":"csi-mock-csi-mock-volumes-791","csi-mock-csi-mock-volumes-7929":"csi-mock-csi-mock-volumes-7929","csi-mock-csi-mock-volumes-7930":"csi-mock-csi-mock-volumes-7930","csi-mock-csi-mock-volumes-7933":"csi-mock-csi-mock-volumes-7933","csi-mock-csi-mock-volumes-7950":"csi-mock-csi-mock-volumes-7950","csi-mock-csi-mock-volumes-8005":"csi-mock-csi-mock-volumes-8005","csi-mock-csi-mock-volumes-8070":"csi-mock-csi-mock-volumes-8070","csi-mock-csi-mock-volumes-8123":"csi-mock-csi-mock-volumes-8123","csi-mock-csi-mock-volumes-8132":"csi-mock-csi-mock-volumes-8132","csi-mock-csi-mock-volumes-8134":"csi-mock-csi-mock-volumes-8134","csi-mock-csi-mock-volumes-8381":"csi-mock-csi-mock-volumes-8381","csi-mock-csi-mock-volumes-8391":"csi-mock-csi-mock-volumes-8391","csi-mock-csi-mock-volumes-8409":"csi-mock-csi-mock-volumes-8409","csi-mock-csi-mock-volumes-8420":"csi-mock-csi-mock-volumes-8420","csi-mock-csi-mock-volumes-8467":"csi-mock-csi-mock-volumes-8467","csi-mock-csi-mock-volumes-8581":"csi-mock-csi-mock-volumes-8581","csi-mock-csi-mock-volumes-8619":"csi-mock-csi-mock-volumes-8619","csi-mock-csi-mock-volumes-8708":"csi-mock-csi-mock-volumes-8708","csi-mock-csi-mock-volumes-8766":"csi-mock-csi-mock-volumes-8766","csi-mock-csi-mock-volumes-8789":"csi-mock-csi-mock-volumes-8789","csi-mock-csi-mock-volumes-8800":"csi-mock-csi-mock-volumes-8800","csi-mock-csi-mock-volumes-8819":"csi-mock-csi-mock-volumes-8819","csi-mock-csi-mock-volumes-8830":"csi-mock-csi-mock-volumes-8830","csi-mock-csi-mock-volumes-9034":"csi-mock-csi-mock-volumes-9034","csi-mock-csi-mock-volumes-9051":"csi-mock-csi-mock-volumes-9051","csi-mock-csi-mock-volumes-9052":"csi-mock-csi-mock-volumes-9052","csi-mock-csi-mock-volumes-9211":"csi-mock-csi-mock-volumes-9211","csi-mock-csi-mock-volumes-9234":"csi-mock-csi-mock-volumes-9234","csi-mock-csi-mock-volumes-9238":"csi-mock-csi-mock-volumes-9238","csi-mock-csi-mock-volumes-9316":"csi-mock-csi-mock-volumes-9316","csi-mock-csi-mock-volumes-9344":"csi-mock-csi-mock-volumes-9344","csi-mock-csi-mock-volumes-9386":"csi-mock-csi-mock-volumes-9386","csi-mock-csi-mock-volumes-941":"csi-mock-csi-mock-volumes-941","csi-mock-csi-mock-volumes-9416":"csi-mock-csi-mock-volumes-9416","csi-mock-csi-mock-volumes-9426":"csi-mock-csi-mock-volumes-9426","csi-mock-csi-mock-volumes-9429":"csi-mock-csi-mock-volumes-9429","csi-mock-csi-mock-volumes-9438":"csi-mock-csi-mock-volumes-9438","csi-mock-csi-mock-volumes-9439":"csi-mock-csi-mock-volumes-9439","csi-mock-csi-mock-volumes-9665":"csi-mock-csi-mock-volumes-9665","csi-mock-csi-mock-volumes-9772":"csi-mock-csi-mock-volumes-9772","csi-mock-csi-mock-volumes-9793":"csi-mock-csi-mock-volumes-9793","csi-mock-csi-mock-volumes-9921":"csi-mock-csi-mock-volumes-9921","csi-mock-csi-mock-volumes-9953":"csi-mock-csi-mock-volumes-9953"} kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 
volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-02-19 10:12:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {kube-controller-manager Update v1 2021-03-22 00:34:44 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}} {e2e.test Update v1 2021-03-22 00:44:05 +0000 UTC FieldsV1 {"f:status":{"f:capacity":{"f:example.com/fakecpu":{}}}}} {kubelet Update v1 2021-03-22 00:44:09 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:csi.volume.kubernetes.io/nodeid":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:io.kubernetes.storage.mock/node":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:example.com/fakecpu":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:kind://docker/latest/latest-worker2,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},example.com/fakecpu: {{1 3} {} 1k DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},example.com/fakecpu: {{1 3} {} 1k DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-03-22 01:19:15 +0000 UTC,LastTransitionTime:2021-02-19 10:12:05 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-03-22 01:19:15 +0000 
UTC,LastTransitionTime:2021-02-19 10:12:05 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-03-22 01:19:15 +0000 UTC,LastTransitionTime:2021-02-19 10:12:05 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-03-22 01:19:15 +0000 UTC,LastTransitionTime:2021-02-19 10:12:35 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.13,},NodeAddress{Type:Hostname,Address:latest-worker2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:c99819c175ab4bf18f91789643e4cec7,SystemUUID:67d3f1bb-f61c-4599-98ac-5879bee4ddde,BootID:b267d78b-f69b-4338-80e8-3f4944338e5d,KernelVersion:4.15.0-118-generic,OSImage:Ubuntu 20.04 LTS,ContainerRuntimeVersion:containerd://1.3.3-14-g449e9269,KubeletVersion:v1.21.0-alpha.0,KubeProxyVersion:v1.21.0-alpha.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/ollivier/clearwater-cassandra@sha256:7460a5e29c91e16bd36534b5284b27832e1c6e0f7c22a4bb79eda79942d250e1 docker.io/ollivier/clearwater-cassandra:hunter],SizeBytes:386500834,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead-prov@sha256:ef3803b4336ea217f9a467dac7f5c4bc3d2cd0f3cbbcc6c101419506fc3a6fa4 docker.io/ollivier/clearwater-homestead-prov:hunter],SizeBytes:360721934,},ContainerImage{Names:[docker.io/ollivier/clearwater-ellis@sha256:abcaf4a9ec74aa746f3b3949f05ee359349fce2b0b3b2deedf67164719cff5dc docker.io/ollivier/clearwater-ellis:hunter],SizeBytes:351519591,},ContainerImage{Names:[docker.io/ollivier/clearwater-homer@sha256:f8738d756137990120f6aafca7e0bc57186ab591a4b51508ba9ddf30688f4de1 docker.io/ollivier/clearwater-homer:hunter],SizeBytes:344304298,},ContainerImage{Names:[docker.io/ollivier/clearwater-astaire@sha256:5bc730435563dc3e31f3a3b7bb58b5899c453929b624052622b3a9a1c2fd04d8 docker.io/ollivier/clearwater-astaire:hunter],SizeBytes:327310970,},ContainerImage{Names:[docker.io/ollivier/clearwater-bono@sha256:0da41ddbf72c158f8e617015450587634cdf6856696005601b2108df92a27254 docker.io/ollivier/clearwater-bono:hunter],SizeBytes:303708624,},ContainerImage{Names:[docker.io/ollivier/clearwater-sprout@sha256:fd91f90bdfbfac30288ca296962e36537f8fa311d34711904e027098227d9f49 docker.io/ollivier/clearwater-sprout:hunter],SizeBytes:298627136,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead@sha256:8467d6c9caefa6a402eba927ee31f8179b2dc3386ac4768dd04f4bd630a1b9e9 docker.io/ollivier/clearwater-homestead:hunter],SizeBytes:295167572,},ContainerImage{Names:[docker.io/ollivier/clearwater-ralf@sha256:7054a741dbd4ea45fa4efd8138c07e21d35fbed627971559de5df2d03883e94f docker.io/ollivier/clearwater-ralf:hunter],SizeBytes:287441316,},ContainerImage{Names:[docker.io/ollivier/clearwater-chronos@sha256:1d5f9b5385db632e3076463fec5a4db5d4a15ef21ac5a112cb42fb0551ffa36d docker.io/ollivier/clearwater-chronos:hunter],SizeBytes:285504787,},ContainerImage{Names:[k8s.gcr.io/etcd:3.4.13-0],SizeBytes:254659261,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.21.0-alpha.0],SizeBytes:136866158,},ContainerImage{Names:[docker.io/aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 
docker.io/aquasec/kube-hunter:0.3.1],SizeBytes:124684106,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:112029652,},ContainerImage{Names:[docker.io/gluster/glusterdynamic-provisioner@sha256:90067cb05a7d217651e84576935934fd9dcff8e6bcdcbaa416bbf36fcd09dbd1 docker.io/gluster/glusterdynamic-provisioner:v1.0],SizeBytes:111200078,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner@sha256:8bc20b52ce066dd4ea3d9eaac40c04ea8a77f47c33789676580cf4c7c9ea3c3d k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner:v1.0],SizeBytes:111199402,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.21.0-alpha.0],SizeBytes:95511852,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.21.0-alpha.0],SizeBytes:88147262,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:85425365,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.21.0-alpha.0],SizeBytes:66088748,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:49230179,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:a3f7549ea04c419276e8b84e90a515bbce5bc8a057be2ed974ec45492eca346e k8s.gcr.io/e2e-test-images/agnhost:2.26],SizeBytes:49216572,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:4f3eec1e602e1e0b0e5858c8f3d399d0ef536799ec1f7666e025f211001cb706 k8s.gcr.io/e2e-test-images/agnhost:2.28],SizeBytes:49210832,},ContainerImage{Names:[k8s.gcr.io/coredns:1.7.0],SizeBytes:45355487,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:41902332,},ContainerImage{Names:[docker.io/library/httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a docker.io/library/httpd:2.4.39-alpine],SizeBytes:41901429,},ContainerImage{Names:[docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 docker.io/library/httpd:2.4.38-alpine],SizeBytes:40765017,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:40765006,},ContainerImage{Names:[docker.io/ollivier/clearwater-live-test@sha256:1ec1c46f531b5466e62e2a3f1e65e97336e8ea0ca50f11a4305775d6aeac9a58 docker.io/ollivier/clearwater-live-test:hunter],SizeBytes:39060692,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:ff02aacd9766d597883fabafc7ad604c719a57611db1bcc1564c69a45b000a55 gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.17],SizeBytes:25311280,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 
k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:24757245,},ContainerImage{Names:[docker.io/pingcap/chaos-daemon@sha256:2138de599ca94104101fcd8f3bd04401f8c88f2bc721d0f77d55a81b85f4d92f docker.io/pingcap/chaos-daemon:v0.8.0],SizeBytes:21885344,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:20c828075d1e36f679d6a91e905b0927141eef5e15be0c9a1ca4a6a0ed9313d2 k8s.gcr.io/sig-storage/csi-provisioner:v2.1.0],SizeBytes:21205045,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:78e3393f5fd5ff6c1e5dada2478cfa456fb7164929e573cf9a87bf6532730679 k8s.gcr.io/sig-storage/csi-provisioner:v1.6.0],SizeBytes:19408504,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/regression-issue-74839@sha256:b4f1d8d61bdad84bd50442d161d5460e4019d53e989b64220fdbc62fc87d76bf k8s.gcr.io/e2e-test-images/regression-issue-74839:1.2],SizeBytes:18651485,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:2ffa647e8107cfd39e5f464e738dce014c9f5e51b108da36c3ab621048d0bbab k8s.gcr.io/sig-storage/csi-attacher:v2.2.0],SizeBytes:18451536,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:6c6a0332693a7c456378f6abd2bb40611826c1e1a733cadbdae2daab3125b71c k8s.gcr.io/sig-storage/csi-resizer:v0.5.0],SizeBytes:18412631,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:4051e85640c22f8e00c34dbd273576fc9e1e2829992656588062be9c0f69b04b k8s.gcr.io/e2e-test-images/nonroot:1.1],SizeBytes:17748448,},ContainerImage{Names:[docker.io/pingcap/chaos-mesh@sha256:b90c04665e0275602e52df37d4b6fdd4621df7d4e4c823fc63356559f67dea72 docker.io/pingcap/chaos-mesh:v0.8.0],SizeBytes:13863887,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e docker.io/coredns/coredns:1.8.0],SizeBytes:12945155,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:4a6e0769130686518325b21b0c1d0688b54e7c79244d48e1b15634e98e40c6ef docker.io/coredns/coredns:1.7.1],SizeBytes:12935701,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:642ff9910da6ea9a8624b0234eef52af9ca75ecbec474c5507cb096bdfbae4e5 docker.io/coredns/coredns:latest],SizeBytes:12893350,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 docker.io/aquasec/kube-bench:0.3.1],SizeBytes:8042926,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:9622c6a6dac7499a055a382930f4de82905a3c5735c0753f7094115c9c871309 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v1.3.0],SizeBytes:7717137,},ContainerImage{Names:[quay.io/coreos/etcd:v2.2.5],SizeBytes:7670543,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 22 01:21:45.597: INFO: Logging kubelet events for node latest-worker2 Mar 22 01:21:45.599: INFO: Logging pods the kubelet thinks are on node latest-worker2 Mar 22 01:21:45.618: INFO: chaos-daemon-4zjcg started at 2021-03-22 00:02:52 +0000 UTC (0+1 container statuses recorded) Mar 22 01:21:45.618: INFO: Container chaos-daemon ready: true, restart count 0 Mar 22 01:21:45.618: INFO: kube-proxy-7q92q started at 2021-02-19 10:12:05 +0000 UTC 
(0+1 container statuses recorded) Mar 22 01:21:45.618: INFO: Container kube-proxy ready: true, restart count 0 Mar 22 01:21:45.618: INFO: kindnet-7qb7q started at 2021-03-22 00:02:52 +0000 UTC (0+1 container statuses recorded) Mar 22 01:21:45.618: INFO: Container kindnet-cni ready: true, restart count 0 W0322 01:21:45.623675 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Mar 22 01:21:45.847: INFO: Latency metrics for node latest-worker2 Mar 22 01:21:45.847: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-1662" for this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • Failure [196.322 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance] [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Mar 22 01:20:39.830: Unexpected error: <*errors.errorString | 0xc001084480>: { s: "no subset of available IP address found for the endpoint affinity-clusterip within timeout 2m0s", } no subset of available IP address found for the endpoint affinity-clusterip within timeout 2m0s occurred /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2563 ------------------------------ {"msg":"FAILED [sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","total":330,"completed":197,"skipped":3384,"failed":13,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]"]} SSSS ------------------------------ [sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-network] Services 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 01:21:45.856: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating service in namespace services-6231 Mar 22 01:21:45.978: INFO: The status of Pod kube-proxy-mode-detector is Pending, waiting for it to be Running (with Ready = true) Mar 22 01:21:47.983: INFO: The status of Pod kube-proxy-mode-detector is Pending, waiting for it to be Running (with Ready = true) Mar 22 01:21:49.983: INFO: The status of Pod kube-proxy-mode-detector is Running (Ready = true) Mar 22 01:21:49.986: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=services-6231 exec kube-proxy-mode-detector -- /bin/sh -x -c curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode' Mar 22 01:21:50.214: INFO: stderr: "+ curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode\n" Mar 22 01:21:50.214: INFO: stdout: "iptables" Mar 22 01:21:50.214: INFO: proxyMode: iptables Mar 22 01:21:50.266: INFO: Waiting for pod kube-proxy-mode-detector to disappear Mar 22 01:21:50.272: INFO: Pod kube-proxy-mode-detector no longer exists STEP: creating service affinity-clusterip-timeout in namespace services-6231 STEP: creating replication controller affinity-clusterip-timeout in namespace services-6231 I0322 01:21:50.316762 7 runners.go:190] Created replication controller with name: affinity-clusterip-timeout, namespace: services-6231, replica count: 3 I0322 01:21:53.368751 7 runners.go:190] affinity-clusterip-timeout Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0322 01:21:56.369863 7 runners.go:190] affinity-clusterip-timeout Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Mar 22 01:21:56.377: INFO: Creating new exec pod E0322 01:22:00.406050 7 reflector.go:138] k8s.io/kubernetes/test/e2e/framework/service/jig.go:437: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: the server could not find the requested resource E0322 01:22:01.810842 7 reflector.go:138] k8s.io/kubernetes/test/e2e/framework/service/jig.go:437: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: the server could not find the requested resource E0322 01:22:03.520064 7 reflector.go:138] k8s.io/kubernetes/test/e2e/framework/service/jig.go:437: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: the server could not find the requested resource E0322 01:22:07.097795 7 reflector.go:138] k8s.io/kubernetes/test/e2e/framework/service/jig.go:437: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: the server could not find the requested resource E0322 01:22:18.450028 7 reflector.go:138] k8s.io/kubernetes/test/e2e/framework/service/jig.go:437: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: the server could not find the requested 
resource E0322 01:22:33.788348 7 reflector.go:138] k8s.io/kubernetes/test/e2e/framework/service/jig.go:437: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: the server could not find the requested resource E0322 01:23:19.188662 7 reflector.go:138] k8s.io/kubernetes/test/e2e/framework/service/jig.go:437: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: the server could not find the requested resource Mar 22 01:24:00.405: FAIL: Unexpected error: <*errors.errorString | 0xc004876010>: { s: "no subset of available IP address found for the endpoint affinity-clusterip-timeout within timeout 2m0s", } no subset of available IP address found for the endpoint affinity-clusterip-timeout within timeout 2m0s occurred Full Stack Trace k8s.io/kubernetes/test/e2e/network.execAffinityTestForSessionAffinityTimeout(0xc000f59760, 0x73e8b88, 0xc002a46160, 0xc0007d4280) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2484 +0x751 k8s.io/kubernetes/test/e2e/network.glob..func24.23() /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1798 +0x9c k8s.io/kubernetes/test/e2e.RunE2ETests(0xc002c6a180) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c k8s.io/kubernetes/test/e2e.TestE2E(0xc002c6a180) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b testing.tRunner(0xc002c6a180, 0x6d60740) /usr/local/go/src/testing/testing.go:1194 +0xef created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1239 +0x2b3 Mar 22 01:24:00.406: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-clusterip-timeout in namespace services-6231, will wait for the garbage collector to delete the pods Mar 22 01:24:00.640: INFO: Deleting ReplicationController affinity-clusterip-timeout took: 107.142545ms Mar 22 01:24:01.341: INFO: Terminating ReplicationController affinity-clusterip-timeout pods took: 701.249677ms [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 STEP: Collecting events from namespace "services-6231". STEP: Found 29 events. 
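This is the same failure mode as the affinity-clusterip test above: the test blocks until the Service's endpoints expose at least one ready IP address, and after the 2m0s budget none has been observed. The repeated "Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: the server could not find the requested resource" errors are consistent with version skew: the cluster components report v1.21.0-alpha.0, which may not yet serve the EndpointSlice API version that a newer test binary requests, so the watch backing the endpoint check would never sync. As a minimal sketch of the kind of wait that is timing out here, assuming client-go, and using an illustrative helper name (this is not the e2e framework's actual implementation, which watches EndpointSlices via an informer):

    package main

    import (
        "context"
        "fmt"
        "time"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // waitForReadyEndpointIP polls the Service's Endpoints object until at least
    // one subset carries a ready address, or the timeout expires -- the condition
    // the failing tests report as "no subset of available IP address found for
    // the endpoint ... within timeout 2m0s".
    func waitForReadyEndpointIP(cs kubernetes.Interface, ns, svc string, timeout time.Duration) error {
        return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
            ep, err := cs.CoreV1().Endpoints(ns).Get(context.TODO(), svc, metav1.GetOptions{})
            if err != nil {
                return false, nil // Endpoints object not published yet; keep polling
            }
            for _, ss := range ep.Subsets {
                if len(ss.Addresses) > 0 {
                    return true, nil // at least one ready pod IP backs the Service
                }
            }
            return false, nil // subsets exist but no address is ready yet
        })
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        // Namespace, service name, and timeout taken from the failing run above.
        if err := waitForReadyEndpointIP(cs, "services-6231", "affinity-clusterip-timeout", 2*time.Minute); err != nil {
            fmt.Println("endpoints never became ready:", err)
        }
    }

The namespace events collected below support this reading: all three affinity-clusterip-timeout pods were scheduled, created, and started normally, which suggests the failure lies in endpoint propagation (or in the test's view of it through the EndpointSlice API) rather than in pod startup.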
Mar 22 01:24:45.143: INFO: At 2021-03-22 01:21:45 +0000 UTC - event for kube-proxy-mode-detector: {default-scheduler } Scheduled: Successfully assigned services-6231/kube-proxy-mode-detector to latest-worker Mar 22 01:24:45.143: INFO: At 2021-03-22 01:21:46 +0000 UTC - event for kube-proxy-mode-detector: {kubelet latest-worker} Pulled: Container image "k8s.gcr.io/e2e-test-images/agnhost:2.28" already present on machine Mar 22 01:24:45.143: INFO: At 2021-03-22 01:21:48 +0000 UTC - event for kube-proxy-mode-detector: {kubelet latest-worker} Started: Started container agnhost-container Mar 22 01:24:45.143: INFO: At 2021-03-22 01:21:48 +0000 UTC - event for kube-proxy-mode-detector: {kubelet latest-worker} Created: Created container agnhost-container Mar 22 01:24:45.143: INFO: At 2021-03-22 01:21:50 +0000 UTC - event for affinity-clusterip-timeout: {replication-controller } SuccessfulCreate: Created pod: affinity-clusterip-timeout-2xt7w Mar 22 01:24:45.143: INFO: At 2021-03-22 01:21:50 +0000 UTC - event for affinity-clusterip-timeout: {replication-controller } SuccessfulCreate: Created pod: affinity-clusterip-timeout-48ghk Mar 22 01:24:45.143: INFO: At 2021-03-22 01:21:50 +0000 UTC - event for affinity-clusterip-timeout: {replication-controller } SuccessfulCreate: Created pod: affinity-clusterip-timeout-82h48 Mar 22 01:24:45.143: INFO: At 2021-03-22 01:21:50 +0000 UTC - event for affinity-clusterip-timeout-2xt7w: {default-scheduler } Scheduled: Successfully assigned services-6231/affinity-clusterip-timeout-2xt7w to latest-worker2 Mar 22 01:24:45.143: INFO: At 2021-03-22 01:21:50 +0000 UTC - event for affinity-clusterip-timeout-48ghk: {default-scheduler } Scheduled: Successfully assigned services-6231/affinity-clusterip-timeout-48ghk to latest-worker Mar 22 01:24:45.143: INFO: At 2021-03-22 01:21:50 +0000 UTC - event for affinity-clusterip-timeout-82h48: {default-scheduler } Scheduled: Successfully assigned services-6231/affinity-clusterip-timeout-82h48 to latest-worker2 Mar 22 01:24:45.143: INFO: At 2021-03-22 01:21:50 +0000 UTC - event for kube-proxy-mode-detector: {kubelet latest-worker} Killing: Stopping container agnhost-container Mar 22 01:24:45.143: INFO: At 2021-03-22 01:21:50 +0000 UTC - event for kube-proxy-mode-detector: {kubelet latest-worker} FailedMount: MountVolume.SetUp failed for volume "default-token-wm86n" : object "services-6231"/"default-token-wm86n" not registered Mar 22 01:24:45.143: INFO: At 2021-03-22 01:21:52 +0000 UTC - event for affinity-clusterip-timeout-48ghk: {kubelet latest-worker} Pulled: Container image "k8s.gcr.io/e2e-test-images/agnhost:2.28" already present on machine Mar 22 01:24:45.143: INFO: At 2021-03-22 01:21:52 +0000 UTC - event for affinity-clusterip-timeout-82h48: {kubelet latest-worker2} Pulled: Container image "k8s.gcr.io/e2e-test-images/agnhost:2.28" already present on machine Mar 22 01:24:45.143: INFO: At 2021-03-22 01:21:53 +0000 UTC - event for affinity-clusterip-timeout-2xt7w: {kubelet latest-worker2} Pulled: Container image "k8s.gcr.io/e2e-test-images/agnhost:2.28" already present on machine Mar 22 01:24:45.144: INFO: At 2021-03-22 01:21:54 +0000 UTC - event for affinity-clusterip-timeout-2xt7w: {kubelet latest-worker2} Created: Created container affinity-clusterip-timeout Mar 22 01:24:45.144: INFO: At 2021-03-22 01:21:54 +0000 UTC - event for affinity-clusterip-timeout-48ghk: {kubelet latest-worker} Created: Created container affinity-clusterip-timeout Mar 22 01:24:45.144: INFO: At 2021-03-22 01:21:54 +0000 UTC - event for 
affinity-clusterip-timeout-48ghk: {kubelet latest-worker} Started: Started container affinity-clusterip-timeout Mar 22 01:24:45.144: INFO: At 2021-03-22 01:21:54 +0000 UTC - event for affinity-clusterip-timeout-82h48: {kubelet latest-worker2} Started: Started container affinity-clusterip-timeout Mar 22 01:24:45.144: INFO: At 2021-03-22 01:21:54 +0000 UTC - event for affinity-clusterip-timeout-82h48: {kubelet latest-worker2} Created: Created container affinity-clusterip-timeout Mar 22 01:24:45.144: INFO: At 2021-03-22 01:21:55 +0000 UTC - event for affinity-clusterip-timeout-2xt7w: {kubelet latest-worker2} Started: Started container affinity-clusterip-timeout Mar 22 01:24:45.144: INFO: At 2021-03-22 01:21:56 +0000 UTC - event for execpod-affinityfc6kd: {default-scheduler } Scheduled: Successfully assigned services-6231/execpod-affinityfc6kd to latest-worker2 Mar 22 01:24:45.144: INFO: At 2021-03-22 01:21:57 +0000 UTC - event for execpod-affinityfc6kd: {kubelet latest-worker2} Pulled: Container image "k8s.gcr.io/e2e-test-images/agnhost:2.28" already present on machine Mar 22 01:24:45.144: INFO: At 2021-03-22 01:21:59 +0000 UTC - event for execpod-affinityfc6kd: {kubelet latest-worker2} Created: Created container agnhost-container Mar 22 01:24:45.144: INFO: At 2021-03-22 01:21:59 +0000 UTC - event for execpod-affinityfc6kd: {kubelet latest-worker2} Started: Started container agnhost-container Mar 22 01:24:45.144: INFO: At 2021-03-22 01:24:00 +0000 UTC - event for execpod-affinityfc6kd: {kubelet latest-worker2} Killing: Stopping container agnhost-container Mar 22 01:24:45.144: INFO: At 2021-03-22 01:24:01 +0000 UTC - event for affinity-clusterip-timeout-2xt7w: {kubelet latest-worker2} Killing: Stopping container affinity-clusterip-timeout Mar 22 01:24:45.144: INFO: At 2021-03-22 01:24:01 +0000 UTC - event for affinity-clusterip-timeout-48ghk: {kubelet latest-worker} Killing: Stopping container affinity-clusterip-timeout Mar 22 01:24:45.144: INFO: At 2021-03-22 01:24:01 +0000 UTC - event for affinity-clusterip-timeout-82h48: {kubelet latest-worker2} Killing: Stopping container affinity-clusterip-timeout Mar 22 01:24:45.170: INFO: POD NODE PHASE GRACE CONDITIONS Mar 22 01:24:45.170: INFO: Mar 22 01:24:45.175: INFO: Logging node info for node latest-control-plane Mar 22 01:24:45.178: INFO: Node Info: &Node{ObjectMeta:{latest-control-plane 490b9532-4cb6-4803-8805-500c50bef538 7010226 0 2021-02-19 10:11:38 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-control-plane kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master:] map[kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2021-02-19 10:11:38 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {kubeadm Update v1 2021-02-19 10:11:39 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{}}}}} {kube-controller-manager Update v1 2021-02-19 10:12:16 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:kind://docker/latest/latest-control-plane,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-03-22 01:19:45 +0000 UTC,LastTransitionTime:2021-02-19 10:11:38 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-03-22 01:19:45 +0000 UTC,LastTransitionTime:2021-02-19 10:11:38 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-03-22 01:19:45 +0000 UTC,LastTransitionTime:2021-02-19 10:11:38 +0000 
UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-03-22 01:19:45 +0000 UTC,LastTransitionTime:2021-02-19 10:12:15 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.14,},NodeAddress{Type:Hostname,Address:latest-control-plane,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:2277e49732264d9b915753a27b5b08cc,SystemUUID:3fcd47a6-9190-448f-a26a-9823c0424f23,BootID:b267d78b-f69b-4338-80e8-3f4944338e5d,KernelVersion:4.15.0-118-generic,OSImage:Ubuntu 20.04 LTS,ContainerRuntimeVersion:containerd://1.3.3-14-g449e9269,KubeletVersion:v1.21.0-alpha.0,KubeProxyVersion:v1.21.0-alpha.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/etcd:3.4.13-0],SizeBytes:254659261,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.21.0-alpha.0],SizeBytes:136866158,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.21.0-alpha.0],SizeBytes:95511852,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.21.0-alpha.0],SizeBytes:88147262,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.21.0-alpha.0],SizeBytes:66088748,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:a3f7549ea04c419276e8b84e90a515bbce5bc8a057be2ed974ec45492eca346e k8s.gcr.io/e2e-test-images/agnhost:2.26],SizeBytes:49216572,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:4f3eec1e602e1e0b0e5858c8f3d399d0ef536799ec1f7666e025f211001cb706 k8s.gcr.io/e2e-test-images/agnhost:2.28],SizeBytes:49210832,},ContainerImage{Names:[k8s.gcr.io/coredns:1.7.0],SizeBytes:45355487,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 docker.io/aquasec/kube-bench:0.3.1],SizeBytes:8042926,},ContainerImage{Names:[k8s.gcr.io/pause:3.2],SizeBytes:685724,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:301268,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 22 01:24:45.178: INFO: Logging kubelet events for node latest-control-plane Mar 22 01:24:45.181: INFO: Logging pods the kubelet thinks are on node latest-control-plane Mar 22 01:24:45.207: INFO: etcd-latest-control-plane started at 2021-02-19 10:11:46 +0000 UTC (0+1 container statuses recorded) Mar 22 01:24:45.207: INFO: Container etcd ready: true, restart count 0 Mar 22 01:24:45.207: INFO: kube-proxy-6jdsd started at 2021-02-19 10:11:54 +0000 UTC (0+1 container statuses recorded) Mar 22 01:24:45.207: INFO: Container kube-proxy ready: true, restart count 0 Mar 22 01:24:45.207: INFO: coredns-74ff55c5b-tqd5x started at 2021-03-22 00:01:48 +0000 UTC (0+1 container statuses recorded) Mar 22 01:24:45.207: INFO: Container coredns ready: true, restart count 0 Mar 22 01:24:45.207: INFO: kube-controller-manager-latest-control-plane started at 2021-02-19 10:11:46 +0000 UTC (0+1 container statuses recorded) Mar 22 01:24:45.207: INFO: Container kube-controller-manager ready: true, restart count 0 Mar 22 01:24:45.207: INFO: kube-scheduler-latest-control-plane started 
at 2021-02-19 10:11:46 +0000 UTC (0+1 container statuses recorded) Mar 22 01:24:45.207: INFO: Container kube-scheduler ready: true, restart count 0 Mar 22 01:24:45.207: INFO: kube-apiserver-latest-control-plane started at 2021-02-19 10:11:46 +0000 UTC (0+1 container statuses recorded) Mar 22 01:24:45.207: INFO: Container kube-apiserver ready: true, restart count 0 Mar 22 01:24:45.208: INFO: kindnet-94zqp started at 2021-02-19 10:11:54 +0000 UTC (0+1 container statuses recorded) Mar 22 01:24:45.208: INFO: Container kindnet-cni ready: true, restart count 0 Mar 22 01:24:45.208: INFO: coredns-74ff55c5b-9rxsk started at 2021-03-22 00:01:47 +0000 UTC (0+1 container statuses recorded) Mar 22 01:24:45.208: INFO: Container coredns ready: true, restart count 0 Mar 22 01:24:45.208: INFO: local-path-provisioner-8b46957d4-54gls started at 2021-02-19 10:12:21 +0000 UTC (0+1 container statuses recorded) Mar 22 01:24:45.208: INFO: Container local-path-provisioner ready: true, restart count 0 W0322 01:24:45.214190 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Mar 22 01:24:45.297: INFO: Latency metrics for node latest-control-plane Mar 22 01:24:45.297: INFO: Logging node info for node latest-worker Mar 22 01:24:45.300: INFO: Node Info: &Node{ObjectMeta:{latest-worker 52cd6d4b-d53f-435d-801a-04c2822dec44 7010265 0 2021-02-19 10:12:05 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux io.kubernetes.storage.mock/node:some-mock-node kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-worker kubernetes.io/os:linux] map[csi.volume.kubernetes.io/nodeid:{"csi-mock-csi-mock-volumes-11":"csi-mock-csi-mock-volumes-11","csi-mock-csi-mock-volumes-1170":"csi-mock-csi-mock-volumes-1170","csi-mock-csi-mock-volumes-1204":"csi-mock-csi-mock-volumes-1204","csi-mock-csi-mock-volumes-1243":"csi-mock-csi-mock-volumes-1243","csi-mock-csi-mock-volumes-135":"csi-mock-csi-mock-volumes-135","csi-mock-csi-mock-volumes-1517":"csi-mock-csi-mock-volumes-1517","csi-mock-csi-mock-volumes-1561":"csi-mock-csi-mock-volumes-1561","csi-mock-csi-mock-volumes-1595":"csi-mock-csi-mock-volumes-1595","csi-mock-csi-mock-volumes-1714":"csi-mock-csi-mock-volumes-1714","csi-mock-csi-mock-volumes-1732":"csi-mock-csi-mock-volumes-1732","csi-mock-csi-mock-volumes-1876":"csi-mock-csi-mock-volumes-1876","csi-mock-csi-mock-volumes-1945":"csi-mock-csi-mock-volumes-1945","csi-mock-csi-mock-volumes-2041":"csi-mock-csi-mock-volumes-2041","csi-mock-csi-mock-volumes-2057":"csi-mock-csi-mock-volumes-2057","csi-mock-csi-mock-volumes-2208":"csi-mock-csi-mock-volumes-2208","csi-mock-csi-mock-volumes-2302":"csi-mock-csi-mock-volumes-2302","csi-mock-csi-mock-volumes-2459":"csi-mock-csi-mock-volumes-2459","csi-mock-csi-mock-volumes-2511":"csi-mock-csi-mock-volumes-2511","csi-mock-csi-mock-volumes-2706":"csi-mock-csi-mock-volumes-2706","csi-mock-csi-mock-volumes-2710":"csi-mock-csi-mock-volumes-2710","csi-mock-csi-mock-volumes-273":"csi-mock-csi-mock-volumes-273","csi-mock-csi-mock-volumes-2782":"csi-mock-csi-mock-volumes-2782","csi-mock-csi-mock-volumes-2797":"csi-mock-csi-mock-volumes-2797","csi-mock-csi-mock-volumes-2805":"csi-mock-csi-mock-volumes-2805","csi-mock-csi-mock-volumes-2843":"csi-mock-csi-mock-volumes-2843","csi-mock-csi-mock-volumes-3075":"csi-mock-csi-mock-volumes-3075","csi-mock-csi-mock-volumes-3107":"csi-mock-csi-mock-volumes-3107","csi-mock-csi-mock-volumes-3115":"csi-mock-csi-mock-volumes-3115","csi-mock-csi-mock-volumes-3164":"csi-mock-csi-mock-volum
es-3164","csi-mock-csi-mock-volumes-3202":"csi-mock-csi-mock-volumes-3202","csi-mock-csi-mock-volumes-3235":"csi-mock-csi-mock-volumes-3235","csi-mock-csi-mock-volumes-3298":"csi-mock-csi-mock-volumes-3298","csi-mock-csi-mock-volumes-3313":"csi-mock-csi-mock-volumes-3313","csi-mock-csi-mock-volumes-3364":"csi-mock-csi-mock-volumes-3364","csi-mock-csi-mock-volumes-3488":"csi-mock-csi-mock-volumes-3488","csi-mock-csi-mock-volumes-3490":"csi-mock-csi-mock-volumes-3490","csi-mock-csi-mock-volumes-3546":"csi-mock-csi-mock-volumes-3546","csi-mock-csi-mock-volumes-3568":"csi-mock-csi-mock-volumes-3568","csi-mock-csi-mock-volumes-3595":"csi-mock-csi-mock-volumes-3595","csi-mock-csi-mock-volumes-3615":"csi-mock-csi-mock-volumes-3615","csi-mock-csi-mock-volumes-3622":"csi-mock-csi-mock-volumes-3622","csi-mock-csi-mock-volumes-3660":"csi-mock-csi-mock-volumes-3660","csi-mock-csi-mock-volumes-3738":"csi-mock-csi-mock-volumes-3738","csi-mock-csi-mock-volumes-380":"csi-mock-csi-mock-volumes-380","csi-mock-csi-mock-volumes-3905":"csi-mock-csi-mock-volumes-3905","csi-mock-csi-mock-volumes-3983":"csi-mock-csi-mock-volumes-3983","csi-mock-csi-mock-volumes-4658":"csi-mock-csi-mock-volumes-4658","csi-mock-csi-mock-volumes-4689":"csi-mock-csi-mock-volumes-4689","csi-mock-csi-mock-volumes-4839":"csi-mock-csi-mock-volumes-4839","csi-mock-csi-mock-volumes-4871":"csi-mock-csi-mock-volumes-4871","csi-mock-csi-mock-volumes-4885":"csi-mock-csi-mock-volumes-4885","csi-mock-csi-mock-volumes-4888":"csi-mock-csi-mock-volumes-4888","csi-mock-csi-mock-volumes-5028":"csi-mock-csi-mock-volumes-5028","csi-mock-csi-mock-volumes-5118":"csi-mock-csi-mock-volumes-5118","csi-mock-csi-mock-volumes-5120":"csi-mock-csi-mock-volumes-5120","csi-mock-csi-mock-volumes-5160":"csi-mock-csi-mock-volumes-5160","csi-mock-csi-mock-volumes-5164":"csi-mock-csi-mock-volumes-5164","csi-mock-csi-mock-volumes-5225":"csi-mock-csi-mock-volumes-5225","csi-mock-csi-mock-volumes-526":"csi-mock-csi-mock-volumes-526","csi-mock-csi-mock-volumes-5365":"csi-mock-csi-mock-volumes-5365","csi-mock-csi-mock-volumes-5399":"csi-mock-csi-mock-volumes-5399","csi-mock-csi-mock-volumes-5443":"csi-mock-csi-mock-volumes-5443","csi-mock-csi-mock-volumes-5520":"csi-mock-csi-mock-volumes-5520","csi-mock-csi-mock-volumes-5561":"csi-mock-csi-mock-volumes-5561","csi-mock-csi-mock-volumes-5608":"csi-mock-csi-mock-volumes-5608","csi-mock-csi-mock-volumes-5652":"csi-mock-csi-mock-volumes-5652","csi-mock-csi-mock-volumes-5672":"csi-mock-csi-mock-volumes-5672","csi-mock-csi-mock-volumes-569":"csi-mock-csi-mock-volumes-569","csi-mock-csi-mock-volumes-5759":"csi-mock-csi-mock-volumes-5759","csi-mock-csi-mock-volumes-5910":"csi-mock-csi-mock-volumes-5910","csi-mock-csi-mock-volumes-6046":"csi-mock-csi-mock-volumes-6046","csi-mock-csi-mock-volumes-6099":"csi-mock-csi-mock-volumes-6099","csi-mock-csi-mock-volumes-621":"csi-mock-csi-mock-volumes-621","csi-mock-csi-mock-volumes-6347":"csi-mock-csi-mock-volumes-6347","csi-mock-csi-mock-volumes-6447":"csi-mock-csi-mock-volumes-6447","csi-mock-csi-mock-volumes-6752":"csi-mock-csi-mock-volumes-6752","csi-mock-csi-mock-volumes-6763":"csi-mock-csi-mock-volumes-6763","csi-mock-csi-mock-volumes-7184":"csi-mock-csi-mock-volumes-7184","csi-mock-csi-mock-volumes-7244":"csi-mock-csi-mock-volumes-7244","csi-mock-csi-mock-volumes-7259":"csi-mock-csi-mock-volumes-7259","csi-mock-csi-mock-volumes-726":"csi-mock-csi-mock-volumes-726","csi-mock-csi-mock-volumes-7302":"csi-mock-csi-mock-volumes-7302","csi-mock-csi-mock-volumes-7346":"csi-mock-csi-mock-volume
s-7346","csi-mock-csi-mock-volumes-7378":"csi-mock-csi-mock-volumes-7378","csi-mock-csi-mock-volumes-7385":"csi-mock-csi-mock-volumes-7385","csi-mock-csi-mock-volumes-746":"csi-mock-csi-mock-volumes-746","csi-mock-csi-mock-volumes-7574":"csi-mock-csi-mock-volumes-7574","csi-mock-csi-mock-volumes-7712":"csi-mock-csi-mock-volumes-7712","csi-mock-csi-mock-volumes-7749":"csi-mock-csi-mock-volumes-7749","csi-mock-csi-mock-volumes-7820":"csi-mock-csi-mock-volumes-7820","csi-mock-csi-mock-volumes-7946":"csi-mock-csi-mock-volumes-7946","csi-mock-csi-mock-volumes-8159":"csi-mock-csi-mock-volumes-8159","csi-mock-csi-mock-volumes-8382":"csi-mock-csi-mock-volumes-8382","csi-mock-csi-mock-volumes-8458":"csi-mock-csi-mock-volumes-8458","csi-mock-csi-mock-volumes-8569":"csi-mock-csi-mock-volumes-8569","csi-mock-csi-mock-volumes-8626":"csi-mock-csi-mock-volumes-8626","csi-mock-csi-mock-volumes-8627":"csi-mock-csi-mock-volumes-8627","csi-mock-csi-mock-volumes-8774":"csi-mock-csi-mock-volumes-8774","csi-mock-csi-mock-volumes-8777":"csi-mock-csi-mock-volumes-8777","csi-mock-csi-mock-volumes-8880":"csi-mock-csi-mock-volumes-8880","csi-mock-csi-mock-volumes-923":"csi-mock-csi-mock-volumes-923","csi-mock-csi-mock-volumes-9279":"csi-mock-csi-mock-volumes-9279","csi-mock-csi-mock-volumes-9372":"csi-mock-csi-mock-volumes-9372","csi-mock-csi-mock-volumes-9687":"csi-mock-csi-mock-volumes-9687","csi-mock-csi-mock-volumes-969":"csi-mock-csi-mock-volumes-969","csi-mock-csi-mock-volumes-983":"csi-mock-csi-mock-volumes-983","csi-mock-csi-mock-volumes-9858":"csi-mock-csi-mock-volumes-9858","csi-mock-csi-mock-volumes-9983":"csi-mock-csi-mock-volumes-9983","csi-mock-csi-mock-volumes-9992":"csi-mock-csi-mock-volumes-9992","csi-mock-csi-mock-volumes-9995":"csi-mock-csi-mock-volumes-9995"} kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-02-19 10:12:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {kube-controller-manager Update v1 2021-03-22 00:39:31 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}}}}} {kubelet Update v1 2021-03-22 00:39:59 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:csi.volume.kubernetes.io/nodeid":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:io.kubernetes.storage.mock/node":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:kind://docker/latest/latest-worker,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-03-22 01:20:05 +0000 UTC,LastTransitionTime:2021-02-19 10:12:05 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-03-22 01:20:05 +0000 UTC,LastTransitionTime:2021-02-19 10:12:05 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-03-22 01:20:05 +0000 UTC,LastTransitionTime:2021-02-19 10:12:05 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-03-22 01:20:05 +0000 UTC,LastTransitionTime:2021-02-19 10:12:25 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready 
status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.9,},NodeAddress{Type:Hostname,Address:latest-worker,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:9501a3ffa7bf40adaca8f4cb3d1dcc93,SystemUUID:6921bb21-67c9-42ea-b514-81b1daac5968,BootID:b267d78b-f69b-4338-80e8-3f4944338e5d,KernelVersion:4.15.0-118-generic,OSImage:Ubuntu 20.04 LTS,ContainerRuntimeVersion:containerd://1.3.3-14-g449e9269,KubeletVersion:v1.21.0-alpha.0,KubeProxyVersion:v1.21.0-alpha.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/ollivier/clearwater-cassandra@sha256:7460a5e29c91e16bd36534b5284b27832e1c6e0f7c22a4bb79eda79942d250e1 docker.io/ollivier/clearwater-cassandra:hunter],SizeBytes:386500834,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead-prov@sha256:ef3803b4336ea217f9a467dac7f5c4bc3d2cd0f3cbbcc6c101419506fc3a6fa4 docker.io/ollivier/clearwater-homestead-prov:hunter],SizeBytes:360721934,},ContainerImage{Names:[docker.io/ollivier/clearwater-ellis@sha256:abcaf4a9ec74aa746f3b3949f05ee359349fce2b0b3b2deedf67164719cff5dc docker.io/ollivier/clearwater-ellis:hunter],SizeBytes:351519591,},ContainerImage{Names:[docker.io/ollivier/clearwater-homer@sha256:f8738d756137990120f6aafca7e0bc57186ab591a4b51508ba9ddf30688f4de1 docker.io/ollivier/clearwater-homer:hunter],SizeBytes:344304298,},ContainerImage{Names:[docker.io/ollivier/clearwater-astaire@sha256:5bc730435563dc3e31f3a3b7bb58b5899c453929b624052622b3a9a1c2fd04d8 docker.io/ollivier/clearwater-astaire:hunter],SizeBytes:327310970,},ContainerImage{Names:[docker.io/ollivier/clearwater-bono@sha256:0da41ddbf72c158f8e617015450587634cdf6856696005601b2108df92a27254 docker.io/ollivier/clearwater-bono:hunter],SizeBytes:303708624,},ContainerImage{Names:[docker.io/ollivier/clearwater-sprout@sha256:fd91f90bdfbfac30288ca296962e36537f8fa311d34711904e027098227d9f49 docker.io/ollivier/clearwater-sprout:hunter],SizeBytes:298627136,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead@sha256:8467d6c9caefa6a402eba927ee31f8179b2dc3386ac4768dd04f4bd630a1b9e9 docker.io/ollivier/clearwater-homestead:hunter],SizeBytes:295167572,},ContainerImage{Names:[docker.io/ollivier/clearwater-ralf@sha256:7054a741dbd4ea45fa4efd8138c07e21d35fbed627971559de5df2d03883e94f docker.io/ollivier/clearwater-ralf:hunter],SizeBytes:287441316,},ContainerImage{Names:[docker.io/ollivier/clearwater-chronos@sha256:1d5f9b5385db632e3076463fec5a4db5d4a15ef21ac5a112cb42fb0551ffa36d docker.io/ollivier/clearwater-chronos:hunter],SizeBytes:285504787,},ContainerImage{Names:[k8s.gcr.io/etcd:3.4.13-0],SizeBytes:254659261,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.21.0-alpha.0],SizeBytes:136866158,},ContainerImage{Names:[docker.io/aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 docker.io/aquasec/kube-hunter:0.3.1],SizeBytes:124684106,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:112029652,},ContainerImage{Names:[docker.io/gluster/glusterdynamic-provisioner@sha256:90067cb05a7d217651e84576935934fd9dcff8e6bcdcbaa416bbf36fcd09dbd1 
docker.io/gluster/glusterdynamic-provisioner:v1.0],SizeBytes:111200078,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner@sha256:8bc20b52ce066dd4ea3d9eaac40c04ea8a77f47c33789676580cf4c7c9ea3c3d k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner:v1.0],SizeBytes:111199402,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.21.0-alpha.0],SizeBytes:95511852,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.21.0-alpha.0],SizeBytes:88147262,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:85425365,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.21.0-alpha.0],SizeBytes:66088748,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:49230179,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:a3f7549ea04c419276e8b84e90a515bbce5bc8a057be2ed974ec45492eca346e k8s.gcr.io/e2e-test-images/agnhost:2.26],SizeBytes:49216572,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:4f3eec1e602e1e0b0e5858c8f3d399d0ef536799ec1f7666e025f211001cb706 k8s.gcr.io/e2e-test-images/agnhost:2.28],SizeBytes:49210832,},ContainerImage{Names:[docker.io/bitnami/kubectl@sha256:471a2f06834db208d0e4bef5856550f911357c41b72d0baefa591b3714839067 docker.io/bitnami/kubectl:latest],SizeBytes:48898314,},ContainerImage{Names:[k8s.gcr.io/coredns:1.7.0],SizeBytes:45355487,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:41902332,},ContainerImage{Names:[docker.io/library/httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a docker.io/library/httpd:2.4.39-alpine],SizeBytes:41901429,},ContainerImage{Names:[docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 docker.io/library/httpd:2.4.38-alpine],SizeBytes:40765017,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:40765006,},ContainerImage{Names:[docker.io/ollivier/clearwater-live-test@sha256:1ec1c46f531b5466e62e2a3f1e65e97336e8ea0ca50f11a4305775d6aeac9a58 docker.io/ollivier/clearwater-live-test:hunter],SizeBytes:39060692,},ContainerImage{Names:[k8s.gcr.io/build-image/debian-iptables@sha256:abe8cef9e116f2d5ec1175c386e33841ff3386779138b425af876384b0fd7ccb k8s.gcr.io/build-image/debian-iptables:buster-v1.5.0],SizeBytes:38088315,},ContainerImage{Names:[k8s.gcr.io/build-image/debian-iptables@sha256:4c9410a4ee555dcb0e8b7bd6fc77c65ac400f7c5bd4555df8187630efaea6ea4 k8s.gcr.io/build-image/debian-iptables:buster-v1.3.0],SizeBytes:37934917,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:ff02aacd9766d597883fabafc7ad604c719a57611db1bcc1564c69a45b000a55 gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.17],SizeBytes:25311280,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 
k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:24757245,},ContainerImage{Names:[docker.io/pingcap/chaos-daemon@sha256:2138de599ca94104101fcd8f3bd04401f8c88f2bc721d0f77d55a81b85f4d92f docker.io/pingcap/chaos-daemon:v0.8.0],SizeBytes:21885344,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:20c828075d1e36f679d6a91e905b0927141eef5e15be0c9a1ca4a6a0ed9313d2 k8s.gcr.io/sig-storage/csi-provisioner:v2.1.0],SizeBytes:21205045,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:78e3393f5fd5ff6c1e5dada2478cfa456fb7164929e573cf9a87bf6532730679 k8s.gcr.io/sig-storage/csi-provisioner:v1.6.0],SizeBytes:19408504,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:2ffa647e8107cfd39e5f464e738dce014c9f5e51b108da36c3ab621048d0bbab k8s.gcr.io/sig-storage/csi-attacher:v2.2.0],SizeBytes:18451536,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:6c6a0332693a7c456378f6abd2bb40611826c1e1a733cadbdae2daab3125b71c k8s.gcr.io/sig-storage/csi-resizer:v0.5.0],SizeBytes:18412631,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:4051e85640c22f8e00c34dbd273576fc9e1e2829992656588062be9c0f69b04b k8s.gcr.io/e2e-test-images/nonroot:1.1],SizeBytes:17748448,},ContainerImage{Names:[docker.io/pingcap/chaos-mesh@sha256:b90c04665e0275602e52df37d4b6fdd4621df7d4e4c823fc63356559f67dea72 docker.io/pingcap/chaos-mesh:v0.8.0],SizeBytes:13863887,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e docker.io/coredns/coredns:latest docker.io/coredns/coredns:1.8.0],SizeBytes:12945155,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:4a6e0769130686518325b21b0c1d0688b54e7c79244d48e1b15634e98e40c6ef docker.io/coredns/coredns:1.7.1],SizeBytes:12935701,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 docker.io/aquasec/kube-bench:0.3.1],SizeBytes:8042926,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:9622c6a6dac7499a055a382930f4de82905a3c5735c0753f7094115c9c871309 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v1.3.0],SizeBytes:7717137,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 22 01:24:45.300: INFO: Logging kubelet events for node latest-worker Mar 22 01:24:45.302: INFO: Logging pods the kubelet thinks is on node latest-worker Mar 22 01:24:45.325: INFO: kube-proxy-5wvjm started at 2021-02-19 10:12:05 +0000 UTC (0+1 container statuses recorded) Mar 22 01:24:45.325: INFO: Container kube-proxy ready: true, restart count 0 Mar 22 01:24:45.325: INFO: kindnet-l4mzm started at 2021-03-22 00:02:51 +0000 UTC (0+1 container statuses recorded) Mar 22 01:24:45.325: INFO: Container kindnet-cni ready: true, restart count 0 Mar 22 01:24:45.325: INFO: chaos-daemon-vb9xf started at 2021-03-22 00:02:51 +0000 UTC (0+1 container statuses recorded) Mar 22 01:24:45.325: INFO: Container chaos-daemon ready: true, restart count 0 Mar 22 01:24:45.325: INFO: chaos-controller-manager-69c479c674-rdmrr started at 2021-03-22 00:02:52 +0000 UTC (0+1 container 
statuses recorded) Mar 22 01:24:45.325: INFO: Container chaos-mesh ready: true, restart count 0 W0322 01:24:45.331490 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Mar 22 01:24:45.546: INFO: Latency metrics for node latest-worker Mar 22 01:24:45.546: INFO: Logging node info for node latest-worker2 Mar 22 01:24:45.549: INFO: Node Info: &Node{ObjectMeta:{latest-worker2 7d2a1377-0c6f-45fb-899e-6c307ecb1803 7010894 0 2021-02-19 10:12:05 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux io.kubernetes.storage.mock/node:some-mock-node kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-worker2 kubernetes.io/os:linux] map[csi.volume.kubernetes.io/nodeid:{"csi-mock-csi-mock-volumes-1005":"csi-mock-csi-mock-volumes-1005","csi-mock-csi-mock-volumes-1125":"csi-mock-csi-mock-volumes-1125","csi-mock-csi-mock-volumes-1158":"csi-mock-csi-mock-volumes-1158","csi-mock-csi-mock-volumes-1167":"csi-mock-csi-mock-volumes-1167","csi-mock-csi-mock-volumes-117":"csi-mock-csi-mock-volumes-117","csi-mock-csi-mock-volumes-123":"csi-mock-csi-mock-volumes-123","csi-mock-csi-mock-volumes-1248":"csi-mock-csi-mock-volumes-1248","csi-mock-csi-mock-volumes-132":"csi-mock-csi-mock-volumes-132","csi-mock-csi-mock-volumes-1462":"csi-mock-csi-mock-volumes-1462","csi-mock-csi-mock-volumes-1508":"csi-mock-csi-mock-volumes-1508","csi-mock-csi-mock-volumes-1620":"csi-mock-csi-mock-volumes-1620","csi-mock-csi-mock-volumes-1694":"csi-mock-csi-mock-volumes-1694","csi-mock-csi-mock-volumes-1823":"csi-mock-csi-mock-volumes-1823","csi-mock-csi-mock-volumes-1829":"csi-mock-csi-mock-volumes-1829","csi-mock-csi-mock-volumes-1852":"csi-mock-csi-mock-volumes-1852","csi-mock-csi-mock-volumes-1856":"csi-mock-csi-mock-volumes-1856","csi-mock-csi-mock-volumes-194":"csi-mock-csi-mock-volumes-194","csi-mock-csi-mock-volumes-1943":"csi-mock-csi-mock-volumes-1943","csi-mock-csi-mock-volumes-2103":"csi-mock-csi-mock-volumes-2103","csi-mock-csi-mock-volumes-2109":"csi-mock-csi-mock-volumes-2109","csi-mock-csi-mock-volumes-2134":"csi-mock-csi-mock-volumes-2134","csi-mock-csi-mock-volumes-2212":"csi-mock-csi-mock-volumes-2212","csi-mock-csi-mock-volumes-2302":"csi-mock-csi-mock-volumes-2302","csi-mock-csi-mock-volumes-2308":"csi-mock-csi-mock-volumes-2308","csi-mock-csi-mock-volumes-2407":"csi-mock-csi-mock-volumes-2407","csi-mock-csi-mock-volumes-2458":"csi-mock-csi-mock-volumes-2458","csi-mock-csi-mock-volumes-2474":"csi-mock-csi-mock-volumes-2474","csi-mock-csi-mock-volumes-254":"csi-mock-csi-mock-volumes-254","csi-mock-csi-mock-volumes-2575":"csi-mock-csi-mock-volumes-2575","csi-mock-csi-mock-volumes-2622":"csi-mock-csi-mock-volumes-2622","csi-mock-csi-mock-volumes-2651":"csi-mock-csi-mock-volumes-2651","csi-mock-csi-mock-volumes-2668":"csi-mock-csi-mock-volumes-2668","csi-mock-csi-mock-volumes-2781":"csi-mock-csi-mock-volumes-2781","csi-mock-csi-mock-volumes-2791":"csi-mock-csi-mock-volumes-2791","csi-mock-csi-mock-volumes-2823":"csi-mock-csi-mock-volumes-2823","csi-mock-csi-mock-volumes-2847":"csi-mock-csi-mock-volumes-2847","csi-mock-csi-mock-volumes-295":"csi-mock-csi-mock-volumes-295","csi-mock-csi-mock-volumes-3129":"csi-mock-csi-mock-volumes-3129","csi-mock-csi-mock-volumes-3190":"csi-mock-csi-mock-volumes-3190","csi-mock-csi-mock-volumes-3428":"csi-mock-csi-mock-volumes-3428","csi-mock-csi-mock-volumes-3490":"csi-mock-csi-mock-volumes-3490","csi-mock-csi-mock-volumes-3491":"csi-mock-csi-mock-volumes-3491","csi-mock-csi-mock-volumes-3532":"cs
i-mock-csi-mock-volumes-3532","csi-mock-csi-mock-volumes-3609":"csi-mock-csi-mock-volumes-3609","csi-mock-csi-mock-volumes-368":"csi-mock-csi-mock-volumes-368","csi-mock-csi-mock-volumes-3698":"csi-mock-csi-mock-volumes-3698","csi-mock-csi-mock-volumes-3714":"csi-mock-csi-mock-volumes-3714","csi-mock-csi-mock-volumes-3720":"csi-mock-csi-mock-volumes-3720","csi-mock-csi-mock-volumes-3722":"csi-mock-csi-mock-volumes-3722","csi-mock-csi-mock-volumes-3723":"csi-mock-csi-mock-volumes-3723","csi-mock-csi-mock-volumes-3783":"csi-mock-csi-mock-volumes-3783","csi-mock-csi-mock-volumes-3887":"csi-mock-csi-mock-volumes-3887","csi-mock-csi-mock-volumes-3973":"csi-mock-csi-mock-volumes-3973","csi-mock-csi-mock-volumes-4074":"csi-mock-csi-mock-volumes-4074","csi-mock-csi-mock-volumes-4164":"csi-mock-csi-mock-volumes-4164","csi-mock-csi-mock-volumes-4181":"csi-mock-csi-mock-volumes-4181","csi-mock-csi-mock-volumes-4442":"csi-mock-csi-mock-volumes-4442","csi-mock-csi-mock-volumes-4483":"csi-mock-csi-mock-volumes-4483","csi-mock-csi-mock-volumes-4549":"csi-mock-csi-mock-volumes-4549","csi-mock-csi-mock-volumes-4707":"csi-mock-csi-mock-volumes-4707","csi-mock-csi-mock-volumes-4906":"csi-mock-csi-mock-volumes-4906","csi-mock-csi-mock-volumes-4977":"csi-mock-csi-mock-volumes-4977","csi-mock-csi-mock-volumes-5116":"csi-mock-csi-mock-volumes-5116","csi-mock-csi-mock-volumes-5166":"csi-mock-csi-mock-volumes-5166","csi-mock-csi-mock-volumes-5233":"csi-mock-csi-mock-volumes-5233","csi-mock-csi-mock-volumes-5344":"csi-mock-csi-mock-volumes-5344","csi-mock-csi-mock-volumes-5406":"csi-mock-csi-mock-volumes-5406","csi-mock-csi-mock-volumes-5433":"csi-mock-csi-mock-volumes-5433","csi-mock-csi-mock-volumes-5483":"csi-mock-csi-mock-volumes-5483","csi-mock-csi-mock-volumes-5486":"csi-mock-csi-mock-volumes-5486","csi-mock-csi-mock-volumes-5520":"csi-mock-csi-mock-volumes-5520","csi-mock-csi-mock-volumes-5540":"csi-mock-csi-mock-volumes-5540","csi-mock-csi-mock-volumes-5592":"csi-mock-csi-mock-volumes-5592","csi-mock-csi-mock-volumes-5741":"csi-mock-csi-mock-volumes-5741","csi-mock-csi-mock-volumes-5753":"csi-mock-csi-mock-volumes-5753","csi-mock-csi-mock-volumes-5790":"csi-mock-csi-mock-volumes-5790","csi-mock-csi-mock-volumes-5820":"csi-mock-csi-mock-volumes-5820","csi-mock-csi-mock-volumes-5830":"csi-mock-csi-mock-volumes-5830","csi-mock-csi-mock-volumes-5880":"csi-mock-csi-mock-volumes-5880","csi-mock-csi-mock-volumes-5886":"csi-mock-csi-mock-volumes-5886","csi-mock-csi-mock-volumes-5899":"csi-mock-csi-mock-volumes-5899","csi-mock-csi-mock-volumes-5925":"csi-mock-csi-mock-volumes-5925","csi-mock-csi-mock-volumes-5928":"csi-mock-csi-mock-volumes-5928","csi-mock-csi-mock-volumes-6098":"csi-mock-csi-mock-volumes-6098","csi-mock-csi-mock-volumes-6154":"csi-mock-csi-mock-volumes-6154","csi-mock-csi-mock-volumes-6193":"csi-mock-csi-mock-volumes-6193","csi-mock-csi-mock-volumes-6237":"csi-mock-csi-mock-volumes-6237","csi-mock-csi-mock-volumes-6393":"csi-mock-csi-mock-volumes-6393","csi-mock-csi-mock-volumes-6394":"csi-mock-csi-mock-volumes-6394","csi-mock-csi-mock-volumes-6468":"csi-mock-csi-mock-volumes-6468","csi-mock-csi-mock-volumes-6508":"csi-mock-csi-mock-volumes-6508","csi-mock-csi-mock-volumes-6516":"csi-mock-csi-mock-volumes-6516","csi-mock-csi-mock-volumes-6520":"csi-mock-csi-mock-volumes-6520","csi-mock-csi-mock-volumes-6574":"csi-mock-csi-mock-volumes-6574","csi-mock-csi-mock-volumes-6663":"csi-mock-csi-mock-volumes-6663","csi-mock-csi-mock-volumes-6715":"csi-mock-csi-mock-volumes-6715","csi-mock-csi-mock-volumes-67
54":"csi-mock-csi-mock-volumes-6754","csi-mock-csi-mock-volumes-6804":"csi-mock-csi-mock-volumes-6804","csi-mock-csi-mock-volumes-6918":"csi-mock-csi-mock-volumes-6918","csi-mock-csi-mock-volumes-6925":"csi-mock-csi-mock-volumes-6925","csi-mock-csi-mock-volumes-7092":"csi-mock-csi-mock-volumes-7092","csi-mock-csi-mock-volumes-7139":"csi-mock-csi-mock-volumes-7139","csi-mock-csi-mock-volumes-7270":"csi-mock-csi-mock-volumes-7270","csi-mock-csi-mock-volumes-7273":"csi-mock-csi-mock-volumes-7273","csi-mock-csi-mock-volumes-7442":"csi-mock-csi-mock-volumes-7442","csi-mock-csi-mock-volumes-7448":"csi-mock-csi-mock-volumes-7448","csi-mock-csi-mock-volumes-7543":"csi-mock-csi-mock-volumes-7543","csi-mock-csi-mock-volumes-7597":"csi-mock-csi-mock-volumes-7597","csi-mock-csi-mock-volumes-7608":"csi-mock-csi-mock-volumes-7608","csi-mock-csi-mock-volumes-7642":"csi-mock-csi-mock-volumes-7642","csi-mock-csi-mock-volumes-7659":"csi-mock-csi-mock-volumes-7659","csi-mock-csi-mock-volumes-7725":"csi-mock-csi-mock-volumes-7725","csi-mock-csi-mock-volumes-7760":"csi-mock-csi-mock-volumes-7760","csi-mock-csi-mock-volumes-778":"csi-mock-csi-mock-volumes-778","csi-mock-csi-mock-volumes-7811":"csi-mock-csi-mock-volumes-7811","csi-mock-csi-mock-volumes-7819":"csi-mock-csi-mock-volumes-7819","csi-mock-csi-mock-volumes-791":"csi-mock-csi-mock-volumes-791","csi-mock-csi-mock-volumes-7929":"csi-mock-csi-mock-volumes-7929","csi-mock-csi-mock-volumes-7930":"csi-mock-csi-mock-volumes-7930","csi-mock-csi-mock-volumes-7933":"csi-mock-csi-mock-volumes-7933","csi-mock-csi-mock-volumes-7950":"csi-mock-csi-mock-volumes-7950","csi-mock-csi-mock-volumes-8005":"csi-mock-csi-mock-volumes-8005","csi-mock-csi-mock-volumes-8070":"csi-mock-csi-mock-volumes-8070","csi-mock-csi-mock-volumes-8123":"csi-mock-csi-mock-volumes-8123","csi-mock-csi-mock-volumes-8132":"csi-mock-csi-mock-volumes-8132","csi-mock-csi-mock-volumes-8134":"csi-mock-csi-mock-volumes-8134","csi-mock-csi-mock-volumes-8381":"csi-mock-csi-mock-volumes-8381","csi-mock-csi-mock-volumes-8391":"csi-mock-csi-mock-volumes-8391","csi-mock-csi-mock-volumes-8409":"csi-mock-csi-mock-volumes-8409","csi-mock-csi-mock-volumes-8420":"csi-mock-csi-mock-volumes-8420","csi-mock-csi-mock-volumes-8467":"csi-mock-csi-mock-volumes-8467","csi-mock-csi-mock-volumes-8581":"csi-mock-csi-mock-volumes-8581","csi-mock-csi-mock-volumes-8619":"csi-mock-csi-mock-volumes-8619","csi-mock-csi-mock-volumes-8708":"csi-mock-csi-mock-volumes-8708","csi-mock-csi-mock-volumes-8766":"csi-mock-csi-mock-volumes-8766","csi-mock-csi-mock-volumes-8789":"csi-mock-csi-mock-volumes-8789","csi-mock-csi-mock-volumes-8800":"csi-mock-csi-mock-volumes-8800","csi-mock-csi-mock-volumes-8819":"csi-mock-csi-mock-volumes-8819","csi-mock-csi-mock-volumes-8830":"csi-mock-csi-mock-volumes-8830","csi-mock-csi-mock-volumes-9034":"csi-mock-csi-mock-volumes-9034","csi-mock-csi-mock-volumes-9051":"csi-mock-csi-mock-volumes-9051","csi-mock-csi-mock-volumes-9052":"csi-mock-csi-mock-volumes-9052","csi-mock-csi-mock-volumes-9211":"csi-mock-csi-mock-volumes-9211","csi-mock-csi-mock-volumes-9234":"csi-mock-csi-mock-volumes-9234","csi-mock-csi-mock-volumes-9238":"csi-mock-csi-mock-volumes-9238","csi-mock-csi-mock-volumes-9316":"csi-mock-csi-mock-volumes-9316","csi-mock-csi-mock-volumes-9344":"csi-mock-csi-mock-volumes-9344","csi-mock-csi-mock-volumes-9386":"csi-mock-csi-mock-volumes-9386","csi-mock-csi-mock-volumes-941":"csi-mock-csi-mock-volumes-941","csi-mock-csi-mock-volumes-9416":"csi-mock-csi-mock-volumes-9416","csi-mock-csi-mock-volumes
-9426":"csi-mock-csi-mock-volumes-9426","csi-mock-csi-mock-volumes-9429":"csi-mock-csi-mock-volumes-9429","csi-mock-csi-mock-volumes-9438":"csi-mock-csi-mock-volumes-9438","csi-mock-csi-mock-volumes-9439":"csi-mock-csi-mock-volumes-9439","csi-mock-csi-mock-volumes-9665":"csi-mock-csi-mock-volumes-9665","csi-mock-csi-mock-volumes-9772":"csi-mock-csi-mock-volumes-9772","csi-mock-csi-mock-volumes-9793":"csi-mock-csi-mock-volumes-9793","csi-mock-csi-mock-volumes-9921":"csi-mock-csi-mock-volumes-9921","csi-mock-csi-mock-volumes-9953":"csi-mock-csi-mock-volumes-9953"} kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-02-19 10:12:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {kube-controller-manager Update v1 2021-03-22 00:34:44 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}} {e2e.test Update v1 2021-03-22 00:44:05 +0000 UTC FieldsV1 {"f:status":{"f:capacity":{"f:example.com/fakecpu":{}}}}} {kubelet Update v1 2021-03-22 00:44:09 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:csi.volume.kubernetes.io/nodeid":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:io.kubernetes.storage.mock/node":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:example.com/fakecpu":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:kind://docker/latest/latest-worker2,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},example.com/fakecpu: {{1 3} {} 1k DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: 
{{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},example.com/fakecpu: {{1 3} {} 1k DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-03-22 01:24:16 +0000 UTC,LastTransitionTime:2021-02-19 10:12:05 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-03-22 01:24:16 +0000 UTC,LastTransitionTime:2021-02-19 10:12:05 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-03-22 01:24:16 +0000 UTC,LastTransitionTime:2021-02-19 10:12:05 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-03-22 01:24:16 +0000 UTC,LastTransitionTime:2021-02-19 10:12:35 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.13,},NodeAddress{Type:Hostname,Address:latest-worker2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:c99819c175ab4bf18f91789643e4cec7,SystemUUID:67d3f1bb-f61c-4599-98ac-5879bee4ddde,BootID:b267d78b-f69b-4338-80e8-3f4944338e5d,KernelVersion:4.15.0-118-generic,OSImage:Ubuntu 20.04 LTS,ContainerRuntimeVersion:containerd://1.3.3-14-g449e9269,KubeletVersion:v1.21.0-alpha.0,KubeProxyVersion:v1.21.0-alpha.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/ollivier/clearwater-cassandra@sha256:7460a5e29c91e16bd36534b5284b27832e1c6e0f7c22a4bb79eda79942d250e1 docker.io/ollivier/clearwater-cassandra:hunter],SizeBytes:386500834,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead-prov@sha256:ef3803b4336ea217f9a467dac7f5c4bc3d2cd0f3cbbcc6c101419506fc3a6fa4 docker.io/ollivier/clearwater-homestead-prov:hunter],SizeBytes:360721934,},ContainerImage{Names:[docker.io/ollivier/clearwater-ellis@sha256:abcaf4a9ec74aa746f3b3949f05ee359349fce2b0b3b2deedf67164719cff5dc docker.io/ollivier/clearwater-ellis:hunter],SizeBytes:351519591,},ContainerImage{Names:[docker.io/ollivier/clearwater-homer@sha256:f8738d756137990120f6aafca7e0bc57186ab591a4b51508ba9ddf30688f4de1 docker.io/ollivier/clearwater-homer:hunter],SizeBytes:344304298,},ContainerImage{Names:[docker.io/ollivier/clearwater-astaire@sha256:5bc730435563dc3e31f3a3b7bb58b5899c453929b624052622b3a9a1c2fd04d8 docker.io/ollivier/clearwater-astaire:hunter],SizeBytes:327310970,},ContainerImage{Names:[docker.io/ollivier/clearwater-bono@sha256:0da41ddbf72c158f8e617015450587634cdf6856696005601b2108df92a27254 docker.io/ollivier/clearwater-bono:hunter],SizeBytes:303708624,},ContainerImage{Names:[docker.io/ollivier/clearwater-sprout@sha256:fd91f90bdfbfac30288ca296962e36537f8fa311d34711904e027098227d9f49 docker.io/ollivier/clearwater-sprout:hunter],SizeBytes:298627136,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead@sha256:8467d6c9caefa6a402eba927ee31f8179b2dc3386ac4768dd04f4bd630a1b9e9 docker.io/ollivier/clearwater-homestead:hunter],SizeBytes:295167572,},ContainerImage{Names:[docker.io/ollivier/clearwater-ralf@sha256:7054a741dbd4ea45fa4efd8138c07e21d35fbed627971559de5df2d03883e94f 
docker.io/ollivier/clearwater-ralf:hunter],SizeBytes:287441316,},ContainerImage{Names:[docker.io/ollivier/clearwater-chronos@sha256:1d5f9b5385db632e3076463fec5a4db5d4a15ef21ac5a112cb42fb0551ffa36d docker.io/ollivier/clearwater-chronos:hunter],SizeBytes:285504787,},ContainerImage{Names:[k8s.gcr.io/etcd:3.4.13-0],SizeBytes:254659261,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.21.0-alpha.0],SizeBytes:136866158,},ContainerImage{Names:[docker.io/aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 docker.io/aquasec/kube-hunter:0.3.1],SizeBytes:124684106,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:112029652,},ContainerImage{Names:[docker.io/gluster/glusterdynamic-provisioner@sha256:90067cb05a7d217651e84576935934fd9dcff8e6bcdcbaa416bbf36fcd09dbd1 docker.io/gluster/glusterdynamic-provisioner:v1.0],SizeBytes:111200078,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner@sha256:8bc20b52ce066dd4ea3d9eaac40c04ea8a77f47c33789676580cf4c7c9ea3c3d k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner:v1.0],SizeBytes:111199402,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.21.0-alpha.0],SizeBytes:95511852,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.21.0-alpha.0],SizeBytes:88147262,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:85425365,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.21.0-alpha.0],SizeBytes:66088748,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:49230179,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:a3f7549ea04c419276e8b84e90a515bbce5bc8a057be2ed974ec45492eca346e k8s.gcr.io/e2e-test-images/agnhost:2.26],SizeBytes:49216572,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:4f3eec1e602e1e0b0e5858c8f3d399d0ef536799ec1f7666e025f211001cb706 k8s.gcr.io/e2e-test-images/agnhost:2.28],SizeBytes:49210832,},ContainerImage{Names:[k8s.gcr.io/coredns:1.7.0],SizeBytes:45355487,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:41902332,},ContainerImage{Names:[docker.io/library/httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a docker.io/library/httpd:2.4.39-alpine],SizeBytes:41901429,},ContainerImage{Names:[docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 docker.io/library/httpd:2.4.38-alpine],SizeBytes:40765017,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:40765006,},ContainerImage{Names:[docker.io/ollivier/clearwater-live-test@sha256:1ec1c46f531b5466e62e2a3f1e65e97336e8ea0ca50f11a4305775d6aeac9a58 
docker.io/ollivier/clearwater-live-test:hunter],SizeBytes:39060692,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:ff02aacd9766d597883fabafc7ad604c719a57611db1bcc1564c69a45b000a55 gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.17],SizeBytes:25311280,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:24757245,},ContainerImage{Names:[docker.io/pingcap/chaos-daemon@sha256:2138de599ca94104101fcd8f3bd04401f8c88f2bc721d0f77d55a81b85f4d92f docker.io/pingcap/chaos-daemon:v0.8.0],SizeBytes:21885344,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:20c828075d1e36f679d6a91e905b0927141eef5e15be0c9a1ca4a6a0ed9313d2 k8s.gcr.io/sig-storage/csi-provisioner:v2.1.0],SizeBytes:21205045,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:78e3393f5fd5ff6c1e5dada2478cfa456fb7164929e573cf9a87bf6532730679 k8s.gcr.io/sig-storage/csi-provisioner:v1.6.0],SizeBytes:19408504,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/regression-issue-74839@sha256:b4f1d8d61bdad84bd50442d161d5460e4019d53e989b64220fdbc62fc87d76bf k8s.gcr.io/e2e-test-images/regression-issue-74839:1.2],SizeBytes:18651485,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:2ffa647e8107cfd39e5f464e738dce014c9f5e51b108da36c3ab621048d0bbab k8s.gcr.io/sig-storage/csi-attacher:v2.2.0],SizeBytes:18451536,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:6c6a0332693a7c456378f6abd2bb40611826c1e1a733cadbdae2daab3125b71c k8s.gcr.io/sig-storage/csi-resizer:v0.5.0],SizeBytes:18412631,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:4051e85640c22f8e00c34dbd273576fc9e1e2829992656588062be9c0f69b04b k8s.gcr.io/e2e-test-images/nonroot:1.1],SizeBytes:17748448,},ContainerImage{Names:[docker.io/pingcap/chaos-mesh@sha256:b90c04665e0275602e52df37d4b6fdd4621df7d4e4c823fc63356559f67dea72 docker.io/pingcap/chaos-mesh:v0.8.0],SizeBytes:13863887,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e docker.io/coredns/coredns:1.8.0],SizeBytes:12945155,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:4a6e0769130686518325b21b0c1d0688b54e7c79244d48e1b15634e98e40c6ef docker.io/coredns/coredns:1.7.1],SizeBytes:12935701,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:642ff9910da6ea9a8624b0234eef52af9ca75ecbec474c5507cb096bdfbae4e5 docker.io/coredns/coredns:latest],SizeBytes:12893350,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 docker.io/aquasec/kube-bench:0.3.1],SizeBytes:8042926,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:9622c6a6dac7499a055a382930f4de82905a3c5735c0753f7094115c9c871309 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v1.3.0],SizeBytes:7717137,},ContainerImage{Names:[quay.io/coreos/etcd:v2.2.5],SizeBytes:7670543,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 22 
01:24:45.550: INFO: Logging kubelet events for node latest-worker2
Mar 22 01:24:45.552: INFO: Logging pods the kubelet thinks is on node latest-worker2
Mar 22 01:24:45.570: INFO: chaos-daemon-4zjcg started at 2021-03-22 00:02:52 +0000 UTC (0+1 container statuses recorded)
Mar 22 01:24:45.570: INFO: Container chaos-daemon ready: true, restart count 0
Mar 22 01:24:45.570: INFO: kube-proxy-7q92q started at 2021-02-19 10:12:05 +0000 UTC (0+1 container statuses recorded)
Mar 22 01:24:45.570: INFO: Container kube-proxy ready: true, restart count 0
Mar 22 01:24:45.570: INFO: kindnet-7qb7q started at 2021-03-22 00:02:52 +0000 UTC (0+1 container statuses recorded)
Mar 22 01:24:45.570: INFO: Container kindnet-cni ready: true, restart count 0
W0322 01:24:45.581499 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
Mar 22 01:24:45.841: INFO: Latency metrics for node latest-worker2
Mar 22 01:24:45.841: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-6231" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750

• Failure [179.993 seconds]
[sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance] [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630

  Mar 22 01:24:00.405: Unexpected error:
      <*errors.errorString | 0xc004876010>: {
          s: "no subset of available IP address found for the endpoint affinity-clusterip-timeout within timeout 2m0s",
      }
      no subset of available IP address found for the endpoint affinity-clusterip-timeout within timeout 2m0s
  occurred

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2484
------------------------------
{"msg":"FAILED [sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","total":330,"completed":197,"skipped":3388,"failed":14,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]"]}
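The failure above is in the endpoint wait rather than in the affinity configuration itself: the test gives the pods backing the Service two minutes to show up as available endpoint addresses, and none did. For reference, a minimal sketch of the kind of Service this case exercises, a ClusterIP Service with ClientIP session affinity and an explicit affinity timeout. The Service name matches the log; the selector, port numbers, and the 10-second timeout are illustrative assumptions, not the framework's actual values.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	// Illustrative affinity timeout; valid values are 1 to 86400 seconds.
	timeoutSeconds := int32(10)

	svc := corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "affinity-clusterip-timeout"},
		Spec: corev1.ServiceSpec{
			Type:     corev1.ServiceTypeClusterIP,
			Selector: map[string]string{"app": "affinity-clusterip-timeout"},
			Ports: []corev1.ServicePort{{
				Port:       80,
				TargetPort: intstr.FromInt(9376), // illustrative backend port
			}},
			// ClientIP affinity pins a client to one backend until the
			// timeout below expires with no traffic from that client.
			SessionAffinity: corev1.ServiceAffinityClientIP,
			SessionAffinityConfig: &corev1.SessionAffinityConfig{
				ClientIP: &corev1.ClientIPConfig{TimeoutSeconds: &timeoutSeconds},
			},
		},
	}

	out, _ := json.MarshalIndent(svc, "", "  ")
	fmt.Println(string(out))
}

Once TimeoutSeconds elapses with no traffic from a given client IP, kube-proxy may route that client's next request to a different backend, which is the expiry behavior this conformance case checks.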
SSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 22 01:24:45.850: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Mar 22 01:24:46.001: INFO: Waiting up to 5m0s for pod "downwardapi-volume-991c69c0-020d-43a2-82a0-9bbee54c349f" in namespace "downward-api-667" to be "Succeeded or Failed"
Mar 22 01:24:46.004: INFO: Pod "downwardapi-volume-991c69c0-020d-43a2-82a0-9bbee54c349f": Phase="Pending", Reason="", readiness=false. Elapsed: 3.461811ms
Mar 22 01:24:48.009: INFO: Pod "downwardapi-volume-991c69c0-020d-43a2-82a0-9bbee54c349f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008721892s
Mar 22 01:24:50.014: INFO: Pod "downwardapi-volume-991c69c0-020d-43a2-82a0-9bbee54c349f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.013008561s
Mar 22 01:24:52.019: INFO: Pod "downwardapi-volume-991c69c0-020d-43a2-82a0-9bbee54c349f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.018689029s
STEP: Saw pod success
Mar 22 01:24:52.019: INFO: Pod "downwardapi-volume-991c69c0-020d-43a2-82a0-9bbee54c349f" satisfied condition "Succeeded or Failed"
Mar 22 01:24:52.022: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-991c69c0-020d-43a2-82a0-9bbee54c349f container client-container:
STEP: delete the pod
Mar 22 01:24:52.071: INFO: Waiting for pod downwardapi-volume-991c69c0-020d-43a2-82a0-9bbee54c349f to disappear
Mar 22 01:24:52.105: INFO: Pod downwardapi-volume-991c69c0-020d-43a2-82a0-9bbee54c349f no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 22 01:24:52.105: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-667" for this suite.
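For reference, a minimal sketch of the pod shape this test builds. The pod name prefix and the client-container name are taken from the log; the agnhost image (listed in the node dumps above), the mounttest invocation, and the /etc/podinfo path are assumptions for illustration.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "client-container",
				Image: "k8s.gcr.io/e2e-test-images/agnhost:2.28",
				// Assumed invocation: agnhost's mounttest subcommand prints
				// the file written by the downward API volume below.
				Command:      []string{"/agnhost", "mounttest", "--file_content=/etc/podinfo/memory_limit"},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
				// Deliberately no resources.limits.memory: with the limit
				// unset, limits.memory resolves to node allocatable memory.
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path: "memory_limit",
							ResourceFieldRef: &corev1.ResourceFieldSelector{
								ContainerName: "client-container",
								Resource:      "limits.memory",
							},
						}},
					},
				},
			}},
		},
	}

	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}

Because the container declares no memory limit, the limits.memory resource field written into the volume file falls back to node allocatable memory, which the node dumps above report as 134922104832 bytes (131759868Ki) on these workers.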
• [SLOW TEST:6.264 seconds] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":330,"completed":198,"skipped":3399,"failed":14,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 01:24:52.115: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Mar 22 01:24:56.418: INFO: Expected: &{} to match Container's Termination Message: -- STEP: 
delete the container [AfterEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 01:24:56.617: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-1925" for this suite. •{"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":330,"completed":199,"skipped":3445,"failed":14,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]"]} SSSSS ------------------------------ [sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-network] EndpointSlice /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 01:24:56.625: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename endpointslice STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] EndpointSlice /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/endpointslice.go:46 [It] should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Mar 22 01:24:58.820: FAIL: No EndpointSlice found for Service endpointslice-8019/example-empty-selector: the server could not find the requested resource Full Stack Trace k8s.io/kubernetes/test/e2e.RunE2ETests(0xc002c6a180) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c k8s.io/kubernetes/test/e2e.TestE2E(0xc002c6a180) 
_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b testing.tRunner(0xc002c6a180, 0x6d60740) /usr/local/go/src/testing/testing.go:1194 +0xef created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1239 +0x2b3 [AfterEach] [sig-network] EndpointSlice /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 STEP: Collecting events from namespace "endpointslice-8019". STEP: Found 0 events. Mar 22 01:24:58.827: INFO: POD NODE PHASE GRACE CONDITIONS Mar 22 01:24:58.827: INFO: Mar 22 01:24:58.833: INFO: Logging node info for node latest-control-plane Mar 22 01:24:58.837: INFO: Node Info: &Node{ObjectMeta:{latest-control-plane 490b9532-4cb6-4803-8805-500c50bef538 7010970 0 2021-02-19 10:11:38 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-control-plane kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master:] map[kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2021-02-19 10:11:38 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {kubeadm Update v1 2021-02-19 10:11:39 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{}}}}} {kube-controller-manager Update v1 2021-02-19 10:12:16 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:kind://docker/latest/latest-control-plane,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-03-22 01:24:46 +0000 UTC,LastTransitionTime:2021-02-19 10:11:38 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-03-22 01:24:46 +0000 UTC,LastTransitionTime:2021-02-19 10:11:38 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-03-22 01:24:46 +0000 UTC,LastTransitionTime:2021-02-19 10:11:38 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-03-22 01:24:46 +0000 UTC,LastTransitionTime:2021-02-19 10:12:15 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.14,},NodeAddress{Type:Hostname,Address:latest-control-plane,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:2277e49732264d9b915753a27b5b08cc,SystemUUID:3fcd47a6-9190-448f-a26a-9823c0424f23,BootID:b267d78b-f69b-4338-80e8-3f4944338e5d,KernelVersion:4.15.0-118-generic,OSImage:Ubuntu 20.04 LTS,ContainerRuntimeVersion:containerd://1.3.3-14-g449e9269,KubeletVersion:v1.21.0-alpha.0,KubeProxyVersion:v1.21.0-alpha.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/etcd:3.4.13-0],SizeBytes:254659261,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.21.0-alpha.0],SizeBytes:136866158,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.21.0-alpha.0],SizeBytes:95511852,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.21.0-alpha.0],SizeBytes:88147262,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.21.0-alpha.0],SizeBytes:66088748,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:a3f7549ea04c419276e8b84e90a515bbce5bc8a057be2ed974ec45492eca346e k8s.gcr.io/e2e-test-images/agnhost:2.26],SizeBytes:49216572,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:4f3eec1e602e1e0b0e5858c8f3d399d0ef536799ec1f7666e025f211001cb706 
k8s.gcr.io/e2e-test-images/agnhost:2.28],SizeBytes:49210832,},ContainerImage{Names:[k8s.gcr.io/coredns:1.7.0],SizeBytes:45355487,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 docker.io/aquasec/kube-bench:0.3.1],SizeBytes:8042926,},ContainerImage{Names:[k8s.gcr.io/pause:3.2],SizeBytes:685724,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:301268,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 22 01:24:58.844: INFO: Logging kubelet events for node latest-control-plane Mar 22 01:24:58.851: INFO: Logging pods the kubelet thinks is on node latest-control-plane Mar 22 01:24:58.858: INFO: kube-controller-manager-latest-control-plane started at 2021-02-19 10:11:46 +0000 UTC (0+1 container statuses recorded) Mar 22 01:24:58.858: INFO: Container kube-controller-manager ready: true, restart count 0 Mar 22 01:24:58.858: INFO: kube-scheduler-latest-control-plane started at 2021-02-19 10:11:46 +0000 UTC (0+1 container statuses recorded) Mar 22 01:24:58.858: INFO: Container kube-scheduler ready: true, restart count 0 Mar 22 01:24:58.858: INFO: kube-apiserver-latest-control-plane started at 2021-02-19 10:11:46 +0000 UTC (0+1 container statuses recorded) Mar 22 01:24:58.858: INFO: Container kube-apiserver ready: true, restart count 0 Mar 22 01:24:58.858: INFO: kindnet-94zqp started at 2021-02-19 10:11:54 +0000 UTC (0+1 container statuses recorded) Mar 22 01:24:58.858: INFO: Container kindnet-cni ready: true, restart count 0 Mar 22 01:24:58.858: INFO: coredns-74ff55c5b-9rxsk started at 2021-03-22 00:01:47 +0000 UTC (0+1 container statuses recorded) Mar 22 01:24:58.858: INFO: Container coredns ready: true, restart count 0 Mar 22 01:24:58.858: INFO: local-path-provisioner-8b46957d4-54gls started at 2021-02-19 10:12:21 +0000 UTC (0+1 container statuses recorded) Mar 22 01:24:58.858: INFO: Container local-path-provisioner ready: true, restart count 0 Mar 22 01:24:58.858: INFO: etcd-latest-control-plane started at 2021-02-19 10:11:46 +0000 UTC (0+1 container statuses recorded) Mar 22 01:24:58.858: INFO: Container etcd ready: true, restart count 0 Mar 22 01:24:58.858: INFO: kube-proxy-6jdsd started at 2021-02-19 10:11:54 +0000 UTC (0+1 container statuses recorded) Mar 22 01:24:58.858: INFO: Container kube-proxy ready: true, restart count 0 Mar 22 01:24:58.858: INFO: coredns-74ff55c5b-tqd5x started at 2021-03-22 00:01:48 +0000 UTC (0+1 container statuses recorded) Mar 22 01:24:58.858: INFO: Container coredns ready: true, restart count 0 W0322 01:24:58.862566 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. 
Mar 22 01:24:58.951: INFO: Latency metrics for node latest-control-plane Mar 22 01:24:58.951: INFO: Logging node info for node latest-worker Mar 22 01:24:58.955: INFO: Node Info: &Node{ObjectMeta:{latest-worker 52cd6d4b-d53f-435d-801a-04c2822dec44 7010265 0 2021-02-19 10:12:05 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux io.kubernetes.storage.mock/node:some-mock-node kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-worker kubernetes.io/os:linux] map[csi.volume.kubernetes.io/nodeid:{"csi-mock-csi-mock-volumes-11":"csi-mock-csi-mock-volumes-11","csi-mock-csi-mock-volumes-1170":"csi-mock-csi-mock-volumes-1170","csi-mock-csi-mock-volumes-1204":"csi-mock-csi-mock-volumes-1204","csi-mock-csi-mock-volumes-1243":"csi-mock-csi-mock-volumes-1243","csi-mock-csi-mock-volumes-135":"csi-mock-csi-mock-volumes-135","csi-mock-csi-mock-volumes-1517":"csi-mock-csi-mock-volumes-1517","csi-mock-csi-mock-volumes-1561":"csi-mock-csi-mock-volumes-1561","csi-mock-csi-mock-volumes-1595":"csi-mock-csi-mock-volumes-1595","csi-mock-csi-mock-volumes-1714":"csi-mock-csi-mock-volumes-1714","csi-mock-csi-mock-volumes-1732":"csi-mock-csi-mock-volumes-1732","csi-mock-csi-mock-volumes-1876":"csi-mock-csi-mock-volumes-1876","csi-mock-csi-mock-volumes-1945":"csi-mock-csi-mock-volumes-1945","csi-mock-csi-mock-volumes-2041":"csi-mock-csi-mock-volumes-2041","csi-mock-csi-mock-volumes-2057":"csi-mock-csi-mock-volumes-2057","csi-mock-csi-mock-volumes-2208":"csi-mock-csi-mock-volumes-2208","csi-mock-csi-mock-volumes-2302":"csi-mock-csi-mock-volumes-2302","csi-mock-csi-mock-volumes-2459":"csi-mock-csi-mock-volumes-2459","csi-mock-csi-mock-volumes-2511":"csi-mock-csi-mock-volumes-2511","csi-mock-csi-mock-volumes-2706":"csi-mock-csi-mock-volumes-2706","csi-mock-csi-mock-volumes-2710":"csi-mock-csi-mock-volumes-2710","csi-mock-csi-mock-volumes-273":"csi-mock-csi-mock-volumes-273","csi-mock-csi-mock-volumes-2782":"csi-mock-csi-mock-volumes-2782","csi-mock-csi-mock-volumes-2797":"csi-mock-csi-mock-volumes-2797","csi-mock-csi-mock-volumes-2805":"csi-mock-csi-mock-volumes-2805","csi-mock-csi-mock-volumes-2843":"csi-mock-csi-mock-volumes-2843","csi-mock-csi-mock-volumes-3075":"csi-mock-csi-mock-volumes-3075","csi-mock-csi-mock-volumes-3107":"csi-mock-csi-mock-volumes-3107","csi-mock-csi-mock-volumes-3115":"csi-mock-csi-mock-volumes-3115","csi-mock-csi-mock-volumes-3164":"csi-mock-csi-mock-volumes-3164","csi-mock-csi-mock-volumes-3202":"csi-mock-csi-mock-volumes-3202","csi-mock-csi-mock-volumes-3235":"csi-mock-csi-mock-volumes-3235","csi-mock-csi-mock-volumes-3298":"csi-mock-csi-mock-volumes-3298","csi-mock-csi-mock-volumes-3313":"csi-mock-csi-mock-volumes-3313","csi-mock-csi-mock-volumes-3364":"csi-mock-csi-mock-volumes-3364","csi-mock-csi-mock-volumes-3488":"csi-mock-csi-mock-volumes-3488","csi-mock-csi-mock-volumes-3490":"csi-mock-csi-mock-volumes-3490","csi-mock-csi-mock-volumes-3546":"csi-mock-csi-mock-volumes-3546","csi-mock-csi-mock-volumes-3568":"csi-mock-csi-mock-volumes-3568","csi-mock-csi-mock-volumes-3595":"csi-mock-csi-mock-volumes-3595","csi-mock-csi-mock-volumes-3615":"csi-mock-csi-mock-volumes-3615","csi-mock-csi-mock-volumes-3622":"csi-mock-csi-mock-volumes-3622","csi-mock-csi-mock-volumes-3660":"csi-mock-csi-mock-volumes-3660","csi-mock-csi-mock-volumes-3738":"csi-mock-csi-mock-volumes-3738","csi-mock-csi-mock-volumes-380":"csi-mock-csi-mock-volumes-380","csi-mock-csi-mock-volumes-3905":"csi-mock-csi-mock-volumes-3905","csi-mock-csi-mock-volumes-3983":"csi-mock-csi-mock-volumes-3983","csi-mock
-csi-mock-volumes-4658":"csi-mock-csi-mock-volumes-4658","csi-mock-csi-mock-volumes-4689":"csi-mock-csi-mock-volumes-4689","csi-mock-csi-mock-volumes-4839":"csi-mock-csi-mock-volumes-4839","csi-mock-csi-mock-volumes-4871":"csi-mock-csi-mock-volumes-4871","csi-mock-csi-mock-volumes-4885":"csi-mock-csi-mock-volumes-4885","csi-mock-csi-mock-volumes-4888":"csi-mock-csi-mock-volumes-4888","csi-mock-csi-mock-volumes-5028":"csi-mock-csi-mock-volumes-5028","csi-mock-csi-mock-volumes-5118":"csi-mock-csi-mock-volumes-5118","csi-mock-csi-mock-volumes-5120":"csi-mock-csi-mock-volumes-5120","csi-mock-csi-mock-volumes-5160":"csi-mock-csi-mock-volumes-5160","csi-mock-csi-mock-volumes-5164":"csi-mock-csi-mock-volumes-5164","csi-mock-csi-mock-volumes-5225":"csi-mock-csi-mock-volumes-5225","csi-mock-csi-mock-volumes-526":"csi-mock-csi-mock-volumes-526","csi-mock-csi-mock-volumes-5365":"csi-mock-csi-mock-volumes-5365","csi-mock-csi-mock-volumes-5399":"csi-mock-csi-mock-volumes-5399","csi-mock-csi-mock-volumes-5443":"csi-mock-csi-mock-volumes-5443","csi-mock-csi-mock-volumes-5520":"csi-mock-csi-mock-volumes-5520","csi-mock-csi-mock-volumes-5561":"csi-mock-csi-mock-volumes-5561","csi-mock-csi-mock-volumes-5608":"csi-mock-csi-mock-volumes-5608","csi-mock-csi-mock-volumes-5652":"csi-mock-csi-mock-volumes-5652","csi-mock-csi-mock-volumes-5672":"csi-mock-csi-mock-volumes-5672","csi-mock-csi-mock-volumes-569":"csi-mock-csi-mock-volumes-569","csi-mock-csi-mock-volumes-5759":"csi-mock-csi-mock-volumes-5759","csi-mock-csi-mock-volumes-5910":"csi-mock-csi-mock-volumes-5910","csi-mock-csi-mock-volumes-6046":"csi-mock-csi-mock-volumes-6046","csi-mock-csi-mock-volumes-6099":"csi-mock-csi-mock-volumes-6099","csi-mock-csi-mock-volumes-621":"csi-mock-csi-mock-volumes-621","csi-mock-csi-mock-volumes-6347":"csi-mock-csi-mock-volumes-6347","csi-mock-csi-mock-volumes-6447":"csi-mock-csi-mock-volumes-6447","csi-mock-csi-mock-volumes-6752":"csi-mock-csi-mock-volumes-6752","csi-mock-csi-mock-volumes-6763":"csi-mock-csi-mock-volumes-6763","csi-mock-csi-mock-volumes-7184":"csi-mock-csi-mock-volumes-7184","csi-mock-csi-mock-volumes-7244":"csi-mock-csi-mock-volumes-7244","csi-mock-csi-mock-volumes-7259":"csi-mock-csi-mock-volumes-7259","csi-mock-csi-mock-volumes-726":"csi-mock-csi-mock-volumes-726","csi-mock-csi-mock-volumes-7302":"csi-mock-csi-mock-volumes-7302","csi-mock-csi-mock-volumes-7346":"csi-mock-csi-mock-volumes-7346","csi-mock-csi-mock-volumes-7378":"csi-mock-csi-mock-volumes-7378","csi-mock-csi-mock-volumes-7385":"csi-mock-csi-mock-volumes-7385","csi-mock-csi-mock-volumes-746":"csi-mock-csi-mock-volumes-746","csi-mock-csi-mock-volumes-7574":"csi-mock-csi-mock-volumes-7574","csi-mock-csi-mock-volumes-7712":"csi-mock-csi-mock-volumes-7712","csi-mock-csi-mock-volumes-7749":"csi-mock-csi-mock-volumes-7749","csi-mock-csi-mock-volumes-7820":"csi-mock-csi-mock-volumes-7820","csi-mock-csi-mock-volumes-7946":"csi-mock-csi-mock-volumes-7946","csi-mock-csi-mock-volumes-8159":"csi-mock-csi-mock-volumes-8159","csi-mock-csi-mock-volumes-8382":"csi-mock-csi-mock-volumes-8382","csi-mock-csi-mock-volumes-8458":"csi-mock-csi-mock-volumes-8458","csi-mock-csi-mock-volumes-8569":"csi-mock-csi-mock-volumes-8569","csi-mock-csi-mock-volumes-8626":"csi-mock-csi-mock-volumes-8626","csi-mock-csi-mock-volumes-8627":"csi-mock-csi-mock-volumes-8627","csi-mock-csi-mock-volumes-8774":"csi-mock-csi-mock-volumes-8774","csi-mock-csi-mock-volumes-8777":"csi-mock-csi-mock-volumes-8777","csi-mock-csi-mock-volumes-8880":"csi-mock-csi-mock-volumes-8880","csi-mock-
csi-mock-volumes-923":"csi-mock-csi-mock-volumes-923","csi-mock-csi-mock-volumes-9279":"csi-mock-csi-mock-volumes-9279","csi-mock-csi-mock-volumes-9372":"csi-mock-csi-mock-volumes-9372","csi-mock-csi-mock-volumes-9687":"csi-mock-csi-mock-volumes-9687","csi-mock-csi-mock-volumes-969":"csi-mock-csi-mock-volumes-969","csi-mock-csi-mock-volumes-983":"csi-mock-csi-mock-volumes-983","csi-mock-csi-mock-volumes-9858":"csi-mock-csi-mock-volumes-9858","csi-mock-csi-mock-volumes-9983":"csi-mock-csi-mock-volumes-9983","csi-mock-csi-mock-volumes-9992":"csi-mock-csi-mock-volumes-9992","csi-mock-csi-mock-volumes-9995":"csi-mock-csi-mock-volumes-9995"} kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-02-19 10:12:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {kube-controller-manager Update v1 2021-03-22 00:39:31 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}}}}} {kubelet Update v1 2021-03-22 00:39:59 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:csi.volume.kubernetes.io/nodeid":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:io.kubernetes.storage.mock/node":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:kind://docker/latest/latest-worker,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 
DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-03-22 01:20:05 +0000 UTC,LastTransitionTime:2021-02-19 10:12:05 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-03-22 01:20:05 +0000 UTC,LastTransitionTime:2021-02-19 10:12:05 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-03-22 01:20:05 +0000 UTC,LastTransitionTime:2021-02-19 10:12:05 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-03-22 01:20:05 +0000 UTC,LastTransitionTime:2021-02-19 10:12:25 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.9,},NodeAddress{Type:Hostname,Address:latest-worker,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:9501a3ffa7bf40adaca8f4cb3d1dcc93,SystemUUID:6921bb21-67c9-42ea-b514-81b1daac5968,BootID:b267d78b-f69b-4338-80e8-3f4944338e5d,KernelVersion:4.15.0-118-generic,OSImage:Ubuntu 20.04 LTS,ContainerRuntimeVersion:containerd://1.3.3-14-g449e9269,KubeletVersion:v1.21.0-alpha.0,KubeProxyVersion:v1.21.0-alpha.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/ollivier/clearwater-cassandra@sha256:7460a5e29c91e16bd36534b5284b27832e1c6e0f7c22a4bb79eda79942d250e1 docker.io/ollivier/clearwater-cassandra:hunter],SizeBytes:386500834,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead-prov@sha256:ef3803b4336ea217f9a467dac7f5c4bc3d2cd0f3cbbcc6c101419506fc3a6fa4 docker.io/ollivier/clearwater-homestead-prov:hunter],SizeBytes:360721934,},ContainerImage{Names:[docker.io/ollivier/clearwater-ellis@sha256:abcaf4a9ec74aa746f3b3949f05ee359349fce2b0b3b2deedf67164719cff5dc docker.io/ollivier/clearwater-ellis:hunter],SizeBytes:351519591,},ContainerImage{Names:[docker.io/ollivier/clearwater-homer@sha256:f8738d756137990120f6aafca7e0bc57186ab591a4b51508ba9ddf30688f4de1 docker.io/ollivier/clearwater-homer:hunter],SizeBytes:344304298,},ContainerImage{Names:[docker.io/ollivier/clearwater-astaire@sha256:5bc730435563dc3e31f3a3b7bb58b5899c453929b624052622b3a9a1c2fd04d8 docker.io/ollivier/clearwater-astaire:hunter],SizeBytes:327310970,},ContainerImage{Names:[docker.io/ollivier/clearwater-bono@sha256:0da41ddbf72c158f8e617015450587634cdf6856696005601b2108df92a27254 docker.io/ollivier/clearwater-bono:hunter],SizeBytes:303708624,},ContainerImage{Names:[docker.io/ollivier/clearwater-sprout@sha256:fd91f90bdfbfac30288ca296962e36537f8fa311d34711904e027098227d9f49 docker.io/ollivier/clearwater-sprout:hunter],SizeBytes:298627136,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead@sha256:8467d6c9caefa6a402eba927ee31f8179b2dc3386ac4768dd04f4bd630a1b9e9 docker.io/ollivier/clearwater-homestead:hunter],SizeBytes:295167572,},ContainerImage{Names:[docker.io/ollivier/clearwater-ralf@sha256:7054a741dbd4ea45fa4efd8138c07e21d35fbed627971559de5df2d03883e94f 
docker.io/ollivier/clearwater-ralf:hunter],SizeBytes:287441316,},ContainerImage{Names:[docker.io/ollivier/clearwater-chronos@sha256:1d5f9b5385db632e3076463fec5a4db5d4a15ef21ac5a112cb42fb0551ffa36d docker.io/ollivier/clearwater-chronos:hunter],SizeBytes:285504787,},ContainerImage{Names:[k8s.gcr.io/etcd:3.4.13-0],SizeBytes:254659261,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.21.0-alpha.0],SizeBytes:136866158,},ContainerImage{Names:[docker.io/aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 docker.io/aquasec/kube-hunter:0.3.1],SizeBytes:124684106,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:112029652,},ContainerImage{Names:[docker.io/gluster/glusterdynamic-provisioner@sha256:90067cb05a7d217651e84576935934fd9dcff8e6bcdcbaa416bbf36fcd09dbd1 docker.io/gluster/glusterdynamic-provisioner:v1.0],SizeBytes:111200078,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner@sha256:8bc20b52ce066dd4ea3d9eaac40c04ea8a77f47c33789676580cf4c7c9ea3c3d k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner:v1.0],SizeBytes:111199402,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.21.0-alpha.0],SizeBytes:95511852,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.21.0-alpha.0],SizeBytes:88147262,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:85425365,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.21.0-alpha.0],SizeBytes:66088748,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:49230179,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:a3f7549ea04c419276e8b84e90a515bbce5bc8a057be2ed974ec45492eca346e k8s.gcr.io/e2e-test-images/agnhost:2.26],SizeBytes:49216572,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:4f3eec1e602e1e0b0e5858c8f3d399d0ef536799ec1f7666e025f211001cb706 k8s.gcr.io/e2e-test-images/agnhost:2.28],SizeBytes:49210832,},ContainerImage{Names:[docker.io/bitnami/kubectl@sha256:471a2f06834db208d0e4bef5856550f911357c41b72d0baefa591b3714839067 docker.io/bitnami/kubectl:latest],SizeBytes:48898314,},ContainerImage{Names:[k8s.gcr.io/coredns:1.7.0],SizeBytes:45355487,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:41902332,},ContainerImage{Names:[docker.io/library/httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a docker.io/library/httpd:2.4.39-alpine],SizeBytes:41901429,},ContainerImage{Names:[docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 docker.io/library/httpd:2.4.38-alpine],SizeBytes:40765017,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 
k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:40765006,},ContainerImage{Names:[docker.io/ollivier/clearwater-live-test@sha256:1ec1c46f531b5466e62e2a3f1e65e97336e8ea0ca50f11a4305775d6aeac9a58 docker.io/ollivier/clearwater-live-test:hunter],SizeBytes:39060692,},ContainerImage{Names:[k8s.gcr.io/build-image/debian-iptables@sha256:abe8cef9e116f2d5ec1175c386e33841ff3386779138b425af876384b0fd7ccb k8s.gcr.io/build-image/debian-iptables:buster-v1.5.0],SizeBytes:38088315,},ContainerImage{Names:[k8s.gcr.io/build-image/debian-iptables@sha256:4c9410a4ee555dcb0e8b7bd6fc77c65ac400f7c5bd4555df8187630efaea6ea4 k8s.gcr.io/build-image/debian-iptables:buster-v1.3.0],SizeBytes:37934917,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:ff02aacd9766d597883fabafc7ad604c719a57611db1bcc1564c69a45b000a55 gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.17],SizeBytes:25311280,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:24757245,},ContainerImage{Names:[docker.io/pingcap/chaos-daemon@sha256:2138de599ca94104101fcd8f3bd04401f8c88f2bc721d0f77d55a81b85f4d92f docker.io/pingcap/chaos-daemon:v0.8.0],SizeBytes:21885344,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:20c828075d1e36f679d6a91e905b0927141eef5e15be0c9a1ca4a6a0ed9313d2 k8s.gcr.io/sig-storage/csi-provisioner:v2.1.0],SizeBytes:21205045,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:78e3393f5fd5ff6c1e5dada2478cfa456fb7164929e573cf9a87bf6532730679 k8s.gcr.io/sig-storage/csi-provisioner:v1.6.0],SizeBytes:19408504,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:2ffa647e8107cfd39e5f464e738dce014c9f5e51b108da36c3ab621048d0bbab k8s.gcr.io/sig-storage/csi-attacher:v2.2.0],SizeBytes:18451536,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:6c6a0332693a7c456378f6abd2bb40611826c1e1a733cadbdae2daab3125b71c k8s.gcr.io/sig-storage/csi-resizer:v0.5.0],SizeBytes:18412631,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:4051e85640c22f8e00c34dbd273576fc9e1e2829992656588062be9c0f69b04b k8s.gcr.io/e2e-test-images/nonroot:1.1],SizeBytes:17748448,},ContainerImage{Names:[docker.io/pingcap/chaos-mesh@sha256:b90c04665e0275602e52df37d4b6fdd4621df7d4e4c823fc63356559f67dea72 docker.io/pingcap/chaos-mesh:v0.8.0],SizeBytes:13863887,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e docker.io/coredns/coredns:latest docker.io/coredns/coredns:1.8.0],SizeBytes:12945155,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:4a6e0769130686518325b21b0c1d0688b54e7c79244d48e1b15634e98e40c6ef docker.io/coredns/coredns:1.7.1],SizeBytes:12935701,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 
docker.io/aquasec/kube-bench:0.3.1],SizeBytes:8042926,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:9622c6a6dac7499a055a382930f4de82905a3c5735c0753f7094115c9c871309 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v1.3.0],SizeBytes:7717137,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 22 01:24:58.956: INFO: Logging kubelet events for node latest-worker Mar 22 01:24:58.959: INFO: Logging pods the kubelet thinks is on node latest-worker Mar 22 01:24:58.983: INFO: kube-proxy-5wvjm started at 2021-02-19 10:12:05 +0000 UTC (0+1 container statuses recorded) Mar 22 01:24:58.983: INFO: Container kube-proxy ready: true, restart count 0 Mar 22 01:24:58.983: INFO: kindnet-l4mzm started at 2021-03-22 00:02:51 +0000 UTC (0+1 container statuses recorded) Mar 22 01:24:58.983: INFO: Container kindnet-cni ready: true, restart count 0 Mar 22 01:24:58.983: INFO: chaos-controller-manager-69c479c674-rdmrr started at 2021-03-22 00:02:52 +0000 UTC (0+1 container statuses recorded) Mar 22 01:24:58.983: INFO: Container chaos-mesh ready: true, restart count 0 Mar 22 01:24:58.983: INFO: chaos-daemon-vb9xf started at 2021-03-22 00:02:51 +0000 UTC (0+1 container statuses recorded) Mar 22 01:24:58.983: INFO: Container chaos-daemon ready: true, restart count 0 W0322 01:24:58.990253 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Mar 22 01:24:59.219: INFO: Latency metrics for node latest-worker Mar 22 01:24:59.219: INFO: Logging node info for node latest-worker2 Mar 22 01:24:59.223: INFO: Node Info: &Node{ObjectMeta:{latest-worker2 7d2a1377-0c6f-45fb-899e-6c307ecb1803 7010894 0 2021-02-19 10:12:05 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux io.kubernetes.storage.mock/node:some-mock-node kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-worker2 kubernetes.io/os:linux] 
map[csi.volume.kubernetes.io/nodeid:{"csi-mock-csi-mock-volumes-1005":"csi-mock-csi-mock-volumes-1005","csi-mock-csi-mock-volumes-1125":"csi-mock-csi-mock-volumes-1125","csi-mock-csi-mock-volumes-1158":"csi-mock-csi-mock-volumes-1158","csi-mock-csi-mock-volumes-1167":"csi-mock-csi-mock-volumes-1167","csi-mock-csi-mock-volumes-117":"csi-mock-csi-mock-volumes-117","csi-mock-csi-mock-volumes-123":"csi-mock-csi-mock-volumes-123","csi-mock-csi-mock-volumes-1248":"csi-mock-csi-mock-volumes-1248","csi-mock-csi-mock-volumes-132":"csi-mock-csi-mock-volumes-132","csi-mock-csi-mock-volumes-1462":"csi-mock-csi-mock-volumes-1462","csi-mock-csi-mock-volumes-1508":"csi-mock-csi-mock-volumes-1508","csi-mock-csi-mock-volumes-1620":"csi-mock-csi-mock-volumes-1620","csi-mock-csi-mock-volumes-1694":"csi-mock-csi-mock-volumes-1694","csi-mock-csi-mock-volumes-1823":"csi-mock-csi-mock-volumes-1823","csi-mock-csi-mock-volumes-1829":"csi-mock-csi-mock-volumes-1829","csi-mock-csi-mock-volumes-1852":"csi-mock-csi-mock-volumes-1852","csi-mock-csi-mock-volumes-1856":"csi-mock-csi-mock-volumes-1856","csi-mock-csi-mock-volumes-194":"csi-mock-csi-mock-volumes-194","csi-mock-csi-mock-volumes-1943":"csi-mock-csi-mock-volumes-1943","csi-mock-csi-mock-volumes-2103":"csi-mock-csi-mock-volumes-2103","csi-mock-csi-mock-volumes-2109":"csi-mock-csi-mock-volumes-2109","csi-mock-csi-mock-volumes-2134":"csi-mock-csi-mock-volumes-2134","csi-mock-csi-mock-volumes-2212":"csi-mock-csi-mock-volumes-2212","csi-mock-csi-mock-volumes-2302":"csi-mock-csi-mock-volumes-2302","csi-mock-csi-mock-volumes-2308":"csi-mock-csi-mock-volumes-2308","csi-mock-csi-mock-volumes-2407":"csi-mock-csi-mock-volumes-2407","csi-mock-csi-mock-volumes-2458":"csi-mock-csi-mock-volumes-2458","csi-mock-csi-mock-volumes-2474":"csi-mock-csi-mock-volumes-2474","csi-mock-csi-mock-volumes-254":"csi-mock-csi-mock-volumes-254","csi-mock-csi-mock-volumes-2575":"csi-mock-csi-mock-volumes-2575","csi-mock-csi-mock-volumes-2622":"csi-mock-csi-mock-volumes-2622","csi-mock-csi-mock-volumes-2651":"csi-mock-csi-mock-volumes-2651","csi-mock-csi-mock-volumes-2668":"csi-mock-csi-mock-volumes-2668","csi-mock-csi-mock-volumes-2781":"csi-mock-csi-mock-volumes-2781","csi-mock-csi-mock-volumes-2791":"csi-mock-csi-mock-volumes-2791","csi-mock-csi-mock-volumes-2823":"csi-mock-csi-mock-volumes-2823","csi-mock-csi-mock-volumes-2847":"csi-mock-csi-mock-volumes-2847","csi-mock-csi-mock-volumes-295":"csi-mock-csi-mock-volumes-295","csi-mock-csi-mock-volumes-3129":"csi-mock-csi-mock-volumes-3129","csi-mock-csi-mock-volumes-3190":"csi-mock-csi-mock-volumes-3190","csi-mock-csi-mock-volumes-3428":"csi-mock-csi-mock-volumes-3428","csi-mock-csi-mock-volumes-3490":"csi-mock-csi-mock-volumes-3490","csi-mock-csi-mock-volumes-3491":"csi-mock-csi-mock-volumes-3491","csi-mock-csi-mock-volumes-3532":"csi-mock-csi-mock-volumes-3532","csi-mock-csi-mock-volumes-3609":"csi-mock-csi-mock-volumes-3609","csi-mock-csi-mock-volumes-368":"csi-mock-csi-mock-volumes-368","csi-mock-csi-mock-volumes-3698":"csi-mock-csi-mock-volumes-3698","csi-mock-csi-mock-volumes-3714":"csi-mock-csi-mock-volumes-3714","csi-mock-csi-mock-volumes-3720":"csi-mock-csi-mock-volumes-3720","csi-mock-csi-mock-volumes-3722":"csi-mock-csi-mock-volumes-3722","csi-mock-csi-mock-volumes-3723":"csi-mock-csi-mock-volumes-3723","csi-mock-csi-mock-volumes-3783":"csi-mock-csi-mock-volumes-3783","csi-mock-csi-mock-volumes-3887":"csi-mock-csi-mock-volumes-3887","csi-mock-csi-mock-volumes-3973":"csi-mock-csi-mock-volumes-3973","csi-mock-csi-mock-volumes-4074":"
csi-mock-csi-mock-volumes-4074","csi-mock-csi-mock-volumes-4164":"csi-mock-csi-mock-volumes-4164","csi-mock-csi-mock-volumes-4181":"csi-mock-csi-mock-volumes-4181","csi-mock-csi-mock-volumes-4442":"csi-mock-csi-mock-volumes-4442","csi-mock-csi-mock-volumes-4483":"csi-mock-csi-mock-volumes-4483","csi-mock-csi-mock-volumes-4549":"csi-mock-csi-mock-volumes-4549","csi-mock-csi-mock-volumes-4707":"csi-mock-csi-mock-volumes-4707","csi-mock-csi-mock-volumes-4906":"csi-mock-csi-mock-volumes-4906","csi-mock-csi-mock-volumes-4977":"csi-mock-csi-mock-volumes-4977","csi-mock-csi-mock-volumes-5116":"csi-mock-csi-mock-volumes-5116","csi-mock-csi-mock-volumes-5166":"csi-mock-csi-mock-volumes-5166","csi-mock-csi-mock-volumes-5233":"csi-mock-csi-mock-volumes-5233","csi-mock-csi-mock-volumes-5344":"csi-mock-csi-mock-volumes-5344","csi-mock-csi-mock-volumes-5406":"csi-mock-csi-mock-volumes-5406","csi-mock-csi-mock-volumes-5433":"csi-mock-csi-mock-volumes-5433","csi-mock-csi-mock-volumes-5483":"csi-mock-csi-mock-volumes-5483","csi-mock-csi-mock-volumes-5486":"csi-mock-csi-mock-volumes-5486","csi-mock-csi-mock-volumes-5520":"csi-mock-csi-mock-volumes-5520","csi-mock-csi-mock-volumes-5540":"csi-mock-csi-mock-volumes-5540","csi-mock-csi-mock-volumes-5592":"csi-mock-csi-mock-volumes-5592","csi-mock-csi-mock-volumes-5741":"csi-mock-csi-mock-volumes-5741","csi-mock-csi-mock-volumes-5753":"csi-mock-csi-mock-volumes-5753","csi-mock-csi-mock-volumes-5790":"csi-mock-csi-mock-volumes-5790","csi-mock-csi-mock-volumes-5820":"csi-mock-csi-mock-volumes-5820","csi-mock-csi-mock-volumes-5830":"csi-mock-csi-mock-volumes-5830","csi-mock-csi-mock-volumes-5880":"csi-mock-csi-mock-volumes-5880","csi-mock-csi-mock-volumes-5886":"csi-mock-csi-mock-volumes-5886","csi-mock-csi-mock-volumes-5899":"csi-mock-csi-mock-volumes-5899","csi-mock-csi-mock-volumes-5925":"csi-mock-csi-mock-volumes-5925","csi-mock-csi-mock-volumes-5928":"csi-mock-csi-mock-volumes-5928","csi-mock-csi-mock-volumes-6098":"csi-mock-csi-mock-volumes-6098","csi-mock-csi-mock-volumes-6154":"csi-mock-csi-mock-volumes-6154","csi-mock-csi-mock-volumes-6193":"csi-mock-csi-mock-volumes-6193","csi-mock-csi-mock-volumes-6237":"csi-mock-csi-mock-volumes-6237","csi-mock-csi-mock-volumes-6393":"csi-mock-csi-mock-volumes-6393","csi-mock-csi-mock-volumes-6394":"csi-mock-csi-mock-volumes-6394","csi-mock-csi-mock-volumes-6468":"csi-mock-csi-mock-volumes-6468","csi-mock-csi-mock-volumes-6508":"csi-mock-csi-mock-volumes-6508","csi-mock-csi-mock-volumes-6516":"csi-mock-csi-mock-volumes-6516","csi-mock-csi-mock-volumes-6520":"csi-mock-csi-mock-volumes-6520","csi-mock-csi-mock-volumes-6574":"csi-mock-csi-mock-volumes-6574","csi-mock-csi-mock-volumes-6663":"csi-mock-csi-mock-volumes-6663","csi-mock-csi-mock-volumes-6715":"csi-mock-csi-mock-volumes-6715","csi-mock-csi-mock-volumes-6754":"csi-mock-csi-mock-volumes-6754","csi-mock-csi-mock-volumes-6804":"csi-mock-csi-mock-volumes-6804","csi-mock-csi-mock-volumes-6918":"csi-mock-csi-mock-volumes-6918","csi-mock-csi-mock-volumes-6925":"csi-mock-csi-mock-volumes-6925","csi-mock-csi-mock-volumes-7092":"csi-mock-csi-mock-volumes-7092","csi-mock-csi-mock-volumes-7139":"csi-mock-csi-mock-volumes-7139","csi-mock-csi-mock-volumes-7270":"csi-mock-csi-mock-volumes-7270","csi-mock-csi-mock-volumes-7273":"csi-mock-csi-mock-volumes-7273","csi-mock-csi-mock-volumes-7442":"csi-mock-csi-mock-volumes-7442","csi-mock-csi-mock-volumes-7448":"csi-mock-csi-mock-volumes-7448","csi-mock-csi-mock-volumes-7543":"csi-mock-csi-mock-volumes-7543","csi-mock-csi-mock-volume
s-7597":"csi-mock-csi-mock-volumes-7597","csi-mock-csi-mock-volumes-7608":"csi-mock-csi-mock-volumes-7608","csi-mock-csi-mock-volumes-7642":"csi-mock-csi-mock-volumes-7642","csi-mock-csi-mock-volumes-7659":"csi-mock-csi-mock-volumes-7659","csi-mock-csi-mock-volumes-7725":"csi-mock-csi-mock-volumes-7725","csi-mock-csi-mock-volumes-7760":"csi-mock-csi-mock-volumes-7760","csi-mock-csi-mock-volumes-778":"csi-mock-csi-mock-volumes-778","csi-mock-csi-mock-volumes-7811":"csi-mock-csi-mock-volumes-7811","csi-mock-csi-mock-volumes-7819":"csi-mock-csi-mock-volumes-7819","csi-mock-csi-mock-volumes-791":"csi-mock-csi-mock-volumes-791","csi-mock-csi-mock-volumes-7929":"csi-mock-csi-mock-volumes-7929","csi-mock-csi-mock-volumes-7930":"csi-mock-csi-mock-volumes-7930","csi-mock-csi-mock-volumes-7933":"csi-mock-csi-mock-volumes-7933","csi-mock-csi-mock-volumes-7950":"csi-mock-csi-mock-volumes-7950","csi-mock-csi-mock-volumes-8005":"csi-mock-csi-mock-volumes-8005","csi-mock-csi-mock-volumes-8070":"csi-mock-csi-mock-volumes-8070","csi-mock-csi-mock-volumes-8123":"csi-mock-csi-mock-volumes-8123","csi-mock-csi-mock-volumes-8132":"csi-mock-csi-mock-volumes-8132","csi-mock-csi-mock-volumes-8134":"csi-mock-csi-mock-volumes-8134","csi-mock-csi-mock-volumes-8381":"csi-mock-csi-mock-volumes-8381","csi-mock-csi-mock-volumes-8391":"csi-mock-csi-mock-volumes-8391","csi-mock-csi-mock-volumes-8409":"csi-mock-csi-mock-volumes-8409","csi-mock-csi-mock-volumes-8420":"csi-mock-csi-mock-volumes-8420","csi-mock-csi-mock-volumes-8467":"csi-mock-csi-mock-volumes-8467","csi-mock-csi-mock-volumes-8581":"csi-mock-csi-mock-volumes-8581","csi-mock-csi-mock-volumes-8619":"csi-mock-csi-mock-volumes-8619","csi-mock-csi-mock-volumes-8708":"csi-mock-csi-mock-volumes-8708","csi-mock-csi-mock-volumes-8766":"csi-mock-csi-mock-volumes-8766","csi-mock-csi-mock-volumes-8789":"csi-mock-csi-mock-volumes-8789","csi-mock-csi-mock-volumes-8800":"csi-mock-csi-mock-volumes-8800","csi-mock-csi-mock-volumes-8819":"csi-mock-csi-mock-volumes-8819","csi-mock-csi-mock-volumes-8830":"csi-mock-csi-mock-volumes-8830","csi-mock-csi-mock-volumes-9034":"csi-mock-csi-mock-volumes-9034","csi-mock-csi-mock-volumes-9051":"csi-mock-csi-mock-volumes-9051","csi-mock-csi-mock-volumes-9052":"csi-mock-csi-mock-volumes-9052","csi-mock-csi-mock-volumes-9211":"csi-mock-csi-mock-volumes-9211","csi-mock-csi-mock-volumes-9234":"csi-mock-csi-mock-volumes-9234","csi-mock-csi-mock-volumes-9238":"csi-mock-csi-mock-volumes-9238","csi-mock-csi-mock-volumes-9316":"csi-mock-csi-mock-volumes-9316","csi-mock-csi-mock-volumes-9344":"csi-mock-csi-mock-volumes-9344","csi-mock-csi-mock-volumes-9386":"csi-mock-csi-mock-volumes-9386","csi-mock-csi-mock-volumes-941":"csi-mock-csi-mock-volumes-941","csi-mock-csi-mock-volumes-9416":"csi-mock-csi-mock-volumes-9416","csi-mock-csi-mock-volumes-9426":"csi-mock-csi-mock-volumes-9426","csi-mock-csi-mock-volumes-9429":"csi-mock-csi-mock-volumes-9429","csi-mock-csi-mock-volumes-9438":"csi-mock-csi-mock-volumes-9438","csi-mock-csi-mock-volumes-9439":"csi-mock-csi-mock-volumes-9439","csi-mock-csi-mock-volumes-9665":"csi-mock-csi-mock-volumes-9665","csi-mock-csi-mock-volumes-9772":"csi-mock-csi-mock-volumes-9772","csi-mock-csi-mock-volumes-9793":"csi-mock-csi-mock-volumes-9793","csi-mock-csi-mock-volumes-9921":"csi-mock-csi-mock-volumes-9921","csi-mock-csi-mock-volumes-9953":"csi-mock-csi-mock-volumes-9953"} kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 
volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-02-19 10:12:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {kube-controller-manager Update v1 2021-03-22 00:34:44 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}} {e2e.test Update v1 2021-03-22 00:44:05 +0000 UTC FieldsV1 {"f:status":{"f:capacity":{"f:example.com/fakecpu":{}}}}} {kubelet Update v1 2021-03-22 00:44:09 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:csi.volume.kubernetes.io/nodeid":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:io.kubernetes.storage.mock/node":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:example.com/fakecpu":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:kind://docker/latest/latest-worker2,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},example.com/fakecpu: {{1 3} {} 1k DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},example.com/fakecpu: {{1 3} {} 1k DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-03-22 01:24:16 +0000 UTC,LastTransitionTime:2021-02-19 10:12:05 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-03-22 01:24:16 +0000 
UTC,LastTransitionTime:2021-02-19 10:12:05 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-03-22 01:24:16 +0000 UTC,LastTransitionTime:2021-02-19 10:12:05 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-03-22 01:24:16 +0000 UTC,LastTransitionTime:2021-02-19 10:12:35 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.13,},NodeAddress{Type:Hostname,Address:latest-worker2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:c99819c175ab4bf18f91789643e4cec7,SystemUUID:67d3f1bb-f61c-4599-98ac-5879bee4ddde,BootID:b267d78b-f69b-4338-80e8-3f4944338e5d,KernelVersion:4.15.0-118-generic,OSImage:Ubuntu 20.04 LTS,ContainerRuntimeVersion:containerd://1.3.3-14-g449e9269,KubeletVersion:v1.21.0-alpha.0,KubeProxyVersion:v1.21.0-alpha.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/ollivier/clearwater-cassandra@sha256:7460a5e29c91e16bd36534b5284b27832e1c6e0f7c22a4bb79eda79942d250e1 docker.io/ollivier/clearwater-cassandra:hunter],SizeBytes:386500834,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead-prov@sha256:ef3803b4336ea217f9a467dac7f5c4bc3d2cd0f3cbbcc6c101419506fc3a6fa4 docker.io/ollivier/clearwater-homestead-prov:hunter],SizeBytes:360721934,},ContainerImage{Names:[docker.io/ollivier/clearwater-ellis@sha256:abcaf4a9ec74aa746f3b3949f05ee359349fce2b0b3b2deedf67164719cff5dc docker.io/ollivier/clearwater-ellis:hunter],SizeBytes:351519591,},ContainerImage{Names:[docker.io/ollivier/clearwater-homer@sha256:f8738d756137990120f6aafca7e0bc57186ab591a4b51508ba9ddf30688f4de1 docker.io/ollivier/clearwater-homer:hunter],SizeBytes:344304298,},ContainerImage{Names:[docker.io/ollivier/clearwater-astaire@sha256:5bc730435563dc3e31f3a3b7bb58b5899c453929b624052622b3a9a1c2fd04d8 docker.io/ollivier/clearwater-astaire:hunter],SizeBytes:327310970,},ContainerImage{Names:[docker.io/ollivier/clearwater-bono@sha256:0da41ddbf72c158f8e617015450587634cdf6856696005601b2108df92a27254 docker.io/ollivier/clearwater-bono:hunter],SizeBytes:303708624,},ContainerImage{Names:[docker.io/ollivier/clearwater-sprout@sha256:fd91f90bdfbfac30288ca296962e36537f8fa311d34711904e027098227d9f49 docker.io/ollivier/clearwater-sprout:hunter],SizeBytes:298627136,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead@sha256:8467d6c9caefa6a402eba927ee31f8179b2dc3386ac4768dd04f4bd630a1b9e9 docker.io/ollivier/clearwater-homestead:hunter],SizeBytes:295167572,},ContainerImage{Names:[docker.io/ollivier/clearwater-ralf@sha256:7054a741dbd4ea45fa4efd8138c07e21d35fbed627971559de5df2d03883e94f docker.io/ollivier/clearwater-ralf:hunter],SizeBytes:287441316,},ContainerImage{Names:[docker.io/ollivier/clearwater-chronos@sha256:1d5f9b5385db632e3076463fec5a4db5d4a15ef21ac5a112cb42fb0551ffa36d docker.io/ollivier/clearwater-chronos:hunter],SizeBytes:285504787,},ContainerImage{Names:[k8s.gcr.io/etcd:3.4.13-0],SizeBytes:254659261,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.21.0-alpha.0],SizeBytes:136866158,},ContainerImage{Names:[docker.io/aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 
docker.io/aquasec/kube-hunter:0.3.1],SizeBytes:124684106,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:112029652,},ContainerImage{Names:[docker.io/gluster/glusterdynamic-provisioner@sha256:90067cb05a7d217651e84576935934fd9dcff8e6bcdcbaa416bbf36fcd09dbd1 docker.io/gluster/glusterdynamic-provisioner:v1.0],SizeBytes:111200078,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner@sha256:8bc20b52ce066dd4ea3d9eaac40c04ea8a77f47c33789676580cf4c7c9ea3c3d k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner:v1.0],SizeBytes:111199402,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.21.0-alpha.0],SizeBytes:95511852,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.21.0-alpha.0],SizeBytes:88147262,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:85425365,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.21.0-alpha.0],SizeBytes:66088748,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:49230179,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:a3f7549ea04c419276e8b84e90a515bbce5bc8a057be2ed974ec45492eca346e k8s.gcr.io/e2e-test-images/agnhost:2.26],SizeBytes:49216572,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:4f3eec1e602e1e0b0e5858c8f3d399d0ef536799ec1f7666e025f211001cb706 k8s.gcr.io/e2e-test-images/agnhost:2.28],SizeBytes:49210832,},ContainerImage{Names:[k8s.gcr.io/coredns:1.7.0],SizeBytes:45355487,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:41902332,},ContainerImage{Names:[docker.io/library/httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a docker.io/library/httpd:2.4.39-alpine],SizeBytes:41901429,},ContainerImage{Names:[docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 docker.io/library/httpd:2.4.38-alpine],SizeBytes:40765017,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:40765006,},ContainerImage{Names:[docker.io/ollivier/clearwater-live-test@sha256:1ec1c46f531b5466e62e2a3f1e65e97336e8ea0ca50f11a4305775d6aeac9a58 docker.io/ollivier/clearwater-live-test:hunter],SizeBytes:39060692,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:ff02aacd9766d597883fabafc7ad604c719a57611db1bcc1564c69a45b000a55 gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.17],SizeBytes:25311280,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 
k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:24757245,},ContainerImage{Names:[docker.io/pingcap/chaos-daemon@sha256:2138de599ca94104101fcd8f3bd04401f8c88f2bc721d0f77d55a81b85f4d92f docker.io/pingcap/chaos-daemon:v0.8.0],SizeBytes:21885344,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:20c828075d1e36f679d6a91e905b0927141eef5e15be0c9a1ca4a6a0ed9313d2 k8s.gcr.io/sig-storage/csi-provisioner:v2.1.0],SizeBytes:21205045,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:78e3393f5fd5ff6c1e5dada2478cfa456fb7164929e573cf9a87bf6532730679 k8s.gcr.io/sig-storage/csi-provisioner:v1.6.0],SizeBytes:19408504,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/regression-issue-74839@sha256:b4f1d8d61bdad84bd50442d161d5460e4019d53e989b64220fdbc62fc87d76bf k8s.gcr.io/e2e-test-images/regression-issue-74839:1.2],SizeBytes:18651485,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:2ffa647e8107cfd39e5f464e738dce014c9f5e51b108da36c3ab621048d0bbab k8s.gcr.io/sig-storage/csi-attacher:v2.2.0],SizeBytes:18451536,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:6c6a0332693a7c456378f6abd2bb40611826c1e1a733cadbdae2daab3125b71c k8s.gcr.io/sig-storage/csi-resizer:v0.5.0],SizeBytes:18412631,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:4051e85640c22f8e00c34dbd273576fc9e1e2829992656588062be9c0f69b04b k8s.gcr.io/e2e-test-images/nonroot:1.1],SizeBytes:17748448,},ContainerImage{Names:[docker.io/pingcap/chaos-mesh@sha256:b90c04665e0275602e52df37d4b6fdd4621df7d4e4c823fc63356559f67dea72 docker.io/pingcap/chaos-mesh:v0.8.0],SizeBytes:13863887,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e docker.io/coredns/coredns:1.8.0],SizeBytes:12945155,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:4a6e0769130686518325b21b0c1d0688b54e7c79244d48e1b15634e98e40c6ef docker.io/coredns/coredns:1.7.1],SizeBytes:12935701,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:642ff9910da6ea9a8624b0234eef52af9ca75ecbec474c5507cb096bdfbae4e5 docker.io/coredns/coredns:latest],SizeBytes:12893350,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 docker.io/aquasec/kube-bench:0.3.1],SizeBytes:8042926,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:9622c6a6dac7499a055a382930f4de82905a3c5735c0753f7094115c9c871309 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v1.3.0],SizeBytes:7717137,},ContainerImage{Names:[quay.io/coreos/etcd:v2.2.5],SizeBytes:7670543,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 22 01:24:59.224: INFO: Logging kubelet events for node latest-worker2 Mar 22 01:24:59.226: INFO: Logging pods the kubelet thinks is on node latest-worker2 Mar 22 01:24:59.233: INFO: kube-proxy-7q92q started at 2021-02-19 10:12:05 +0000 UTC (0+1 container statuses recorded) Mar 22 01:24:59.233: INFO: Container kube-proxy ready: true, restart count 0 Mar 22 01:24:59.233: INFO: kindnet-7qb7q started at 2021-03-22 00:02:52 +0000 UTC (0+1 
container statuses recorded) Mar 22 01:24:59.233: INFO: Container kindnet-cni ready: true, restart count 0 Mar 22 01:24:59.233: INFO: chaos-daemon-4zjcg started at 2021-03-22 00:02:52 +0000 UTC (0+1 container statuses recorded) Mar 22 01:24:59.233: INFO: Container chaos-daemon ready: true, restart count 0 W0322 01:24:59.238998 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Mar 22 01:24:59.462: INFO: Latency metrics for node latest-worker2 Mar 22 01:24:59.462: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "endpointslice-8019" for this suite. • Failure [2.845 seconds] [sig-network] EndpointSlice /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance] [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Mar 22 01:24:58.820: No EndpointSlice found for Service endpointslice-8019/example-empty-selector: the server could not find the requested resource /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113 ------------------------------ {"msg":"FAILED [sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","total":330,"completed":199,"skipped":3450,"failed":15,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should test the lifecycle of a ReplicationController [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-apps] ReplicationController 
[sig-apps] ReplicationController should test the lifecycle of a ReplicationController [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 01:24:59.471: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54 [It] should test the lifecycle of a ReplicationController [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a ReplicationController STEP: waiting for RC to be added STEP: waiting for available Replicas STEP: patching ReplicationController STEP: waiting for RC to be modified STEP: patching ReplicationController status STEP: waiting for RC to be modified STEP: waiting for available Replicas STEP: fetching ReplicationController status STEP: patching ReplicationController scale STEP: waiting for RC to be modified STEP: waiting for ReplicationController's scale to be the max amount STEP: fetching ReplicationController; ensuring that it's patched STEP: updating ReplicationController status STEP: waiting for RC to be modified STEP: listing all ReplicationControllers STEP: checking that ReplicationController has expected values STEP: deleting ReplicationControllers by collection STEP: waiting for ReplicationController to have a DELETED watchEvent [AfterEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 01:25:06.402: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-7427" for this suite.
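The lifecycle steps above map directly onto core/v1 ReplicationController verbs. A minimal client-go sketch of two of them, the scale patch and the delete-by-collection (namespace, object name, and label selector are placeholders, not the test's actual values):

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config") // assumed path
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	rcs := cs.CoreV1().ReplicationControllers("default") // namespace is illustrative

	// "patching ReplicationController scale": bump spec.replicas in place.
	patch := []byte(`{"spec":{"replicas":2}}`)
	rc, err := rcs.Patch(context.TODO(), "my-rc", types.StrategicMergePatchType,
		patch, metav1.PatchOptions{}) // "my-rc" is a placeholder name
	if err != nil {
		panic(err)
	}
	fmt.Printf("replicas now %d\n", *rc.Spec.Replicas)

	// "deleting ReplicationControllers by collection": delete by label selector.
	err = rcs.DeleteCollection(context.TODO(), metav1.DeleteOptions{},
		metav1.ListOptions{LabelSelector: "test=rc-lifecycle"}) // hypothetical label
	if err != nil {
		panic(err)
	}
}
```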
• [SLOW TEST:6.940 seconds] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should test the lifecycle of a ReplicationController [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should test the lifecycle of a ReplicationController [Conformance]","total":330,"completed":200,"skipped":3486,"failed":15,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 01:25:06.411: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a watch on configmaps with a certain label STEP: creating a new configmap STEP: modifying the configmap once STEP: changing the label value of the configmap STEP: Expecting to observe a delete notification for the watched object Mar 22 01:25:06.537: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-7174 4cb410e4-ebad-46da-82a6-f63def02050c 7011155 0 2021-03-22 01:25:06 +0000 UTC 
map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2021-03-22 01:25:06 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Mar 22 01:25:06.538: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-7174 4cb410e4-ebad-46da-82a6-f63def02050c 7011156 0 2021-03-22 01:25:06 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2021-03-22 01:25:06 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} Mar 22 01:25:06.538: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-7174 4cb410e4-ebad-46da-82a6-f63def02050c 7011157 0 2021-03-22 01:25:06 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2021-03-22 01:25:06 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying the configmap a second time STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements STEP: changing the label value of the configmap back STEP: modifying the configmap a third time STEP: deleting the configmap STEP: Expecting to observe an add notification for the watched object when the label value was restored Mar 22 01:25:16.659: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-7174 4cb410e4-ebad-46da-82a6-f63def02050c 7011197 0 2021-03-22 01:25:06 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2021-03-22 01:25:06 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Mar 22 01:25:16.660: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-7174 4cb410e4-ebad-46da-82a6-f63def02050c 7011198 0 2021-03-22 01:25:06 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2021-03-22 01:25:06 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,} Mar 22 01:25:16.660: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-7174 4cb410e4-ebad-46da-82a6-f63def02050c 7011199 0 2021-03-22 01:25:06 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2021-03-22 01:25:06 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 01:25:16.660: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-7174" for this suite. 
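The ADDED/MODIFIED/DELETED sequence above falls out of a label-selector-scoped watch: relabeling an object out of the selector is reported to the watcher as DELETED, and restoring the label as ADDED, even though the object existed the whole time. A minimal sketch, assuming the selector the test uses (the other names are illustrative):

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config") // assumed path
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Watch only ConfigMaps carrying the label the test flips off and on.
	w, err := cs.CoreV1().ConfigMaps("default").Watch(context.TODO(), metav1.ListOptions{
		LabelSelector: "watch-this-configmap=label-changed-and-restored",
	})
	if err != nil {
		panic(err)
	}
	defer w.Stop()

	// Relabeling away from the selector surfaces as DELETED; restoring it
	// surfaces as ADDED, exactly as in the log above. The channel closes
	// when the server ends the watch.
	for ev := range w.ResultChan() {
		fmt.Printf("Got : %s %T\n", ev.Type, ev.Object)
	}
}
```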
• [SLOW TEST:10.300 seconds] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":330,"completed":201,"skipped":3520,"failed":15,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 01:25:16.712: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] pod should support shared volumes between containers [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating Pod STEP: Reading file content from the nginx-container Mar 22 01:25:22.854: INFO: ExecWithOptions {Command:[/bin/sh -c cat /usr/share/volumeshare/shareddata.txt] Namespace:emptydir-1155 PodName:pod-sharedvolume-1454d365-f796-47b8-95aa-5f1b68a45d1a ContainerName:busybox-main-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 22 01:25:22.854: INFO: >>> kubeConfig: /root/.kube/config Mar 22 01:25:22.979: INFO: Exec stderr: "" [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 01:25:22.979: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1155" for this suite.
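The pod behind this test mounts a single emptyDir volume into two containers, so a file written by one container is readable by the other; the exec step in the log simply cats the shared file from the second container. A hedged sketch of such a pod (images, names, and commands are assumptions; the mount path matches the log):

```go
package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config") // assumed path
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	vol := corev1.Volume{
		Name:         "volumeshare",
		VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
	}
	mount := corev1.VolumeMount{Name: "volumeshare", MountPath: "/usr/share/volumeshare"}

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-sharedvolume"}, // placeholder name
		Spec: corev1.PodSpec{
			Volumes: []corev1.Volume{vol},
			Containers: []corev1.Container{
				{ // writer: creates the file the other container reads
					Name:         "busybox-writer",
					Image:        "busybox",
					Command:      []string{"/bin/sh", "-c", "echo hello > /usr/share/volumeshare/shareddata.txt && sleep 3600"},
					VolumeMounts: []corev1.VolumeMount{mount},
				},
				{ // reader: the log's exec step targets a container like this one
					Name:         "busybox-main-container",
					Image:        "busybox",
					Command:      []string{"sleep", "3600"},
					VolumeMounts: []corev1.VolumeMount{mount},
				},
			},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
```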
• [SLOW TEST:6.279 seconds] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 pod should support shared volumes between containers [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":330,"completed":202,"skipped":3546,"failed":15,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]"]} SSSSS ------------------------------ [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] version v1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 01:25:22.992: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy through a service and a pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: starting an echo server on multiple ports STEP: creating replication controller proxy-service-xrs65 in namespace proxy-3601 I0322 01:25:23.172707 7 runners.go:190] Created replication controller with name: proxy-service-xrs65, namespace: proxy-3601,
replica count: 1 I0322 01:25:24.224106 7 runners.go:190] proxy-service-xrs65 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0322 01:25:25.224690 7 runners.go:190] proxy-service-xrs65 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0322 01:25:26.224983 7 runners.go:190] proxy-service-xrs65 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0322 01:25:27.225348 7 runners.go:190] proxy-service-xrs65 Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Mar 22 01:25:27.282: INFO: setup took 4.171225443s, starting test cases STEP: running 16 cases, 20 attempts per case, 320 total attempts Mar 22 01:25:27.312: INFO: (0) /api/v1/namespaces/proxy-3601/pods/proxy-service-xrs65-fgglc:162/proxy/: bar (200; 28.920092ms) Mar 22 01:25:27.312: INFO: (0) /api/v1/namespaces/proxy-3601/pods/http:proxy-service-xrs65-fgglc:1080/proxy/: ... (200; 29.14057ms) Mar 22 01:25:27.313: INFO: (0) /api/v1/namespaces/proxy-3601/pods/proxy-service-xrs65-fgglc:1080/proxy/: test<... (200; 29.931641ms) Mar 22 01:25:27.314: INFO: (0) /api/v1/namespaces/proxy-3601/pods/http:proxy-service-xrs65-fgglc:162/proxy/: bar (200; 30.897555ms) Mar 22 01:25:27.318: INFO: (0) /api/v1/namespaces/proxy-3601/services/http:proxy-service-xrs65:portname2/proxy/: bar (200; 35.275457ms) Mar 22 01:25:27.318: INFO: (0) /api/v1/namespaces/proxy-3601/pods/proxy-service-xrs65-fgglc/proxy/: test (200; 35.521916ms) Mar 22 01:25:27.318: INFO: (0) /api/v1/namespaces/proxy-3601/pods/http:proxy-service-xrs65-fgglc:160/proxy/: foo (200; 35.777723ms) Mar 22 01:25:27.319: INFO: (0) /api/v1/namespaces/proxy-3601/services/proxy-service-xrs65:portname1/proxy/: foo (200; 36.373736ms) Mar 22 01:25:27.319: INFO: (0) /api/v1/namespaces/proxy-3601/services/http:proxy-service-xrs65:portname1/proxy/: foo (200; 36.269054ms) Mar 22 01:25:27.319: INFO: (0) /api/v1/namespaces/proxy-3601/services/proxy-service-xrs65:portname2/proxy/: bar (200; 36.480104ms) Mar 22 01:25:27.319: INFO: (0) /api/v1/namespaces/proxy-3601/pods/proxy-service-xrs65-fgglc:160/proxy/: foo (200; 36.577854ms) Mar 22 01:25:27.320: INFO: (0) /api/v1/namespaces/proxy-3601/pods/https:proxy-service-xrs65-fgglc:462/proxy/: tls qux (200; 36.987113ms) Mar 22 01:25:27.320: INFO: (0) /api/v1/namespaces/proxy-3601/services/https:proxy-service-xrs65:tlsportname2/proxy/: tls qux (200; 36.962369ms) Mar 22 01:25:27.320: INFO: (0) /api/v1/namespaces/proxy-3601/pods/https:proxy-service-xrs65-fgglc:460/proxy/: tls baz (200; 37.351945ms) Mar 22 01:25:27.320: INFO: (0) /api/v1/namespaces/proxy-3601/services/https:proxy-service-xrs65:tlsportname1/proxy/: tls baz (200; 37.388959ms) Mar 22 01:25:27.322: INFO: (0) /api/v1/namespaces/proxy-3601/pods/https:proxy-service-xrs65-fgglc:443/proxy/: test<... (200; 5.239277ms) Mar 22 01:25:27.328: INFO: (1) /api/v1/namespaces/proxy-3601/services/http:proxy-service-xrs65:portname1/proxy/: foo (200; 5.902304ms) Mar 22 01:25:27.328: INFO: (1) /api/v1/namespaces/proxy-3601/services/http:proxy-service-xrs65:portname2/proxy/: bar (200; 6.072387ms) Mar 22 01:25:27.328: INFO: (1) /api/v1/namespaces/proxy-3601/pods/proxy-service-xrs65-fgglc:162/proxy/: bar (200; 6.643369ms) Mar 22 01:25:27.328: INFO: (1) /api/v1/namespaces/proxy-3601/pods/http:proxy-service-xrs65-fgglc:1080/proxy/: ... 
(200; 6.631758ms) Mar 22 01:25:27.329: INFO: (1) /api/v1/namespaces/proxy-3601/services/proxy-service-xrs65:portname2/proxy/: bar (200; 6.806894ms) Mar 22 01:25:27.329: INFO: (1) /api/v1/namespaces/proxy-3601/pods/https:proxy-service-xrs65-fgglc:443/proxy/: test (200; 6.989752ms) Mar 22 01:25:27.329: INFO: (1) /api/v1/namespaces/proxy-3601/pods/http:proxy-service-xrs65-fgglc:162/proxy/: bar (200; 7.053665ms) Mar 22 01:25:27.329: INFO: (1) /api/v1/namespaces/proxy-3601/services/https:proxy-service-xrs65:tlsportname1/proxy/: tls baz (200; 6.996746ms) Mar 22 01:25:27.329: INFO: (1) /api/v1/namespaces/proxy-3601/services/https:proxy-service-xrs65:tlsportname2/proxy/: tls qux (200; 7.376079ms) Mar 22 01:25:27.329: INFO: (1) /api/v1/namespaces/proxy-3601/pods/proxy-service-xrs65-fgglc:160/proxy/: foo (200; 7.472489ms) Mar 22 01:25:27.329: INFO: (1) /api/v1/namespaces/proxy-3601/pods/https:proxy-service-xrs65-fgglc:462/proxy/: tls qux (200; 7.418977ms) Mar 22 01:25:27.333: INFO: (2) /api/v1/namespaces/proxy-3601/pods/https:proxy-service-xrs65-fgglc:460/proxy/: tls baz (200; 4.175771ms) Mar 22 01:25:27.334: INFO: (2) /api/v1/namespaces/proxy-3601/pods/https:proxy-service-xrs65-fgglc:462/proxy/: tls qux (200; 4.550096ms) Mar 22 01:25:27.334: INFO: (2) /api/v1/namespaces/proxy-3601/pods/proxy-service-xrs65-fgglc:1080/proxy/: test<... (200; 4.608683ms) Mar 22 01:25:27.334: INFO: (2) /api/v1/namespaces/proxy-3601/pods/proxy-service-xrs65-fgglc:160/proxy/: foo (200; 4.625909ms) Mar 22 01:25:27.334: INFO: (2) /api/v1/namespaces/proxy-3601/pods/proxy-service-xrs65-fgglc/proxy/: test (200; 4.676698ms) Mar 22 01:25:27.334: INFO: (2) /api/v1/namespaces/proxy-3601/pods/http:proxy-service-xrs65-fgglc:162/proxy/: bar (200; 4.777723ms) Mar 22 01:25:27.334: INFO: (2) /api/v1/namespaces/proxy-3601/pods/https:proxy-service-xrs65-fgglc:443/proxy/: ... 
(200; 5.927548ms) Mar 22 01:25:27.336: INFO: (2) /api/v1/namespaces/proxy-3601/services/proxy-service-xrs65:portname1/proxy/: foo (200; 6.647023ms) Mar 22 01:25:27.336: INFO: (2) /api/v1/namespaces/proxy-3601/services/http:proxy-service-xrs65:portname1/proxy/: foo (200; 6.868752ms) Mar 22 01:25:27.336: INFO: (2) /api/v1/namespaces/proxy-3601/services/proxy-service-xrs65:portname2/proxy/: bar (200; 7.138419ms) Mar 22 01:25:27.336: INFO: (2) /api/v1/namespaces/proxy-3601/services/http:proxy-service-xrs65:portname2/proxy/: bar (200; 7.187919ms) Mar 22 01:25:27.337: INFO: (2) /api/v1/namespaces/proxy-3601/services/https:proxy-service-xrs65:tlsportname2/proxy/: tls qux (200; 7.298417ms) Mar 22 01:25:27.337: INFO: (2) /api/v1/namespaces/proxy-3601/services/https:proxy-service-xrs65:tlsportname1/proxy/: tls baz (200; 7.275972ms) Mar 22 01:25:27.341: INFO: (3) /api/v1/namespaces/proxy-3601/services/https:proxy-service-xrs65:tlsportname2/proxy/: tls qux (200; 3.945287ms) Mar 22 01:25:27.341: INFO: (3) /api/v1/namespaces/proxy-3601/pods/proxy-service-xrs65-fgglc/proxy/: test (200; 4.11645ms) Mar 22 01:25:27.341: INFO: (3) /api/v1/namespaces/proxy-3601/services/proxy-service-xrs65:portname1/proxy/: foo (200; 4.155595ms) Mar 22 01:25:27.341: INFO: (3) /api/v1/namespaces/proxy-3601/services/https:proxy-service-xrs65:tlsportname1/proxy/: tls baz (200; 4.676989ms) Mar 22 01:25:27.341: INFO: (3) /api/v1/namespaces/proxy-3601/services/proxy-service-xrs65:portname2/proxy/: bar (200; 4.658454ms) Mar 22 01:25:27.341: INFO: (3) /api/v1/namespaces/proxy-3601/services/http:proxy-service-xrs65:portname2/proxy/: bar (200; 4.773746ms) Mar 22 01:25:27.341: INFO: (3) /api/v1/namespaces/proxy-3601/pods/proxy-service-xrs65-fgglc:160/proxy/: foo (200; 4.731599ms) Mar 22 01:25:27.341: INFO: (3) /api/v1/namespaces/proxy-3601/pods/http:proxy-service-xrs65-fgglc:160/proxy/: foo (200; 4.778994ms) Mar 22 01:25:27.341: INFO: (3) /api/v1/namespaces/proxy-3601/pods/proxy-service-xrs65-fgglc:162/proxy/: bar (200; 4.665417ms) Mar 22 01:25:27.341: INFO: (3) /api/v1/namespaces/proxy-3601/pods/https:proxy-service-xrs65-fgglc:460/proxy/: tls baz (200; 4.712497ms) Mar 22 01:25:27.341: INFO: (3) /api/v1/namespaces/proxy-3601/pods/proxy-service-xrs65-fgglc:1080/proxy/: test<... (200; 4.793582ms) Mar 22 01:25:27.341: INFO: (3) /api/v1/namespaces/proxy-3601/pods/https:proxy-service-xrs65-fgglc:443/proxy/: ... (200; 5.239501ms) Mar 22 01:25:27.342: INFO: (3) /api/v1/namespaces/proxy-3601/pods/https:proxy-service-xrs65-fgglc:462/proxy/: tls qux (200; 5.233454ms) Mar 22 01:25:27.344: INFO: (4) /api/v1/namespaces/proxy-3601/pods/http:proxy-service-xrs65-fgglc:160/proxy/: foo (200; 1.882489ms) Mar 22 01:25:27.346: INFO: (4) /api/v1/namespaces/proxy-3601/pods/proxy-service-xrs65-fgglc:1080/proxy/: test<... 
(200; 3.529917ms) Mar 22 01:25:27.346: INFO: (4) /api/v1/namespaces/proxy-3601/pods/proxy-service-xrs65-fgglc:160/proxy/: foo (200; 3.568631ms) Mar 22 01:25:27.346: INFO: (4) /api/v1/namespaces/proxy-3601/pods/https:proxy-service-xrs65-fgglc:443/proxy/: test (200; 4.184923ms) Mar 22 01:25:27.346: INFO: (4) /api/v1/namespaces/proxy-3601/services/proxy-service-xrs65:portname2/proxy/: bar (200; 4.127325ms) Mar 22 01:25:27.346: INFO: (4) /api/v1/namespaces/proxy-3601/services/proxy-service-xrs65:portname1/proxy/: foo (200; 4.224974ms) Mar 22 01:25:27.346: INFO: (4) /api/v1/namespaces/proxy-3601/services/http:proxy-service-xrs65:portname1/proxy/: foo (200; 4.277558ms) Mar 22 01:25:27.346: INFO: (4) /api/v1/namespaces/proxy-3601/services/https:proxy-service-xrs65:tlsportname1/proxy/: tls baz (200; 4.39592ms) Mar 22 01:25:27.347: INFO: (4) /api/v1/namespaces/proxy-3601/pods/https:proxy-service-xrs65-fgglc:460/proxy/: tls baz (200; 4.596044ms) Mar 22 01:25:27.347: INFO: (4) /api/v1/namespaces/proxy-3601/pods/http:proxy-service-xrs65-fgglc:162/proxy/: bar (200; 4.68571ms) Mar 22 01:25:27.347: INFO: (4) /api/v1/namespaces/proxy-3601/pods/http:proxy-service-xrs65-fgglc:1080/proxy/: ... (200; 4.732372ms) Mar 22 01:25:27.347: INFO: (4) /api/v1/namespaces/proxy-3601/services/http:proxy-service-xrs65:portname2/proxy/: bar (200; 4.920161ms) Mar 22 01:25:27.347: INFO: (4) /api/v1/namespaces/proxy-3601/services/https:proxy-service-xrs65:tlsportname2/proxy/: tls qux (200; 5.046582ms) Mar 22 01:25:27.347: INFO: (4) /api/v1/namespaces/proxy-3601/pods/https:proxy-service-xrs65-fgglc:462/proxy/: tls qux (200; 5.086854ms) Mar 22 01:25:27.370: INFO: (5) /api/v1/namespaces/proxy-3601/pods/https:proxy-service-xrs65-fgglc:460/proxy/: tls baz (200; 22.49101ms) Mar 22 01:25:27.370: INFO: (5) /api/v1/namespaces/proxy-3601/pods/proxy-service-xrs65-fgglc:162/proxy/: bar (200; 22.525243ms) Mar 22 01:25:27.370: INFO: (5) /api/v1/namespaces/proxy-3601/pods/http:proxy-service-xrs65-fgglc:160/proxy/: foo (200; 22.540393ms) Mar 22 01:25:27.373: INFO: (5) /api/v1/namespaces/proxy-3601/services/http:proxy-service-xrs65:portname1/proxy/: foo (200; 25.232752ms) Mar 22 01:25:27.373: INFO: (5) /api/v1/namespaces/proxy-3601/services/proxy-service-xrs65:portname2/proxy/: bar (200; 25.310899ms) Mar 22 01:25:27.373: INFO: (5) /api/v1/namespaces/proxy-3601/pods/http:proxy-service-xrs65-fgglc:162/proxy/: bar (200; 25.314464ms) Mar 22 01:25:27.373: INFO: (5) /api/v1/namespaces/proxy-3601/pods/proxy-service-xrs65-fgglc:1080/proxy/: test<... (200; 25.3077ms) Mar 22 01:25:27.373: INFO: (5) /api/v1/namespaces/proxy-3601/pods/http:proxy-service-xrs65-fgglc:1080/proxy/: ... (200; 25.418548ms) Mar 22 01:25:27.373: INFO: (5) /api/v1/namespaces/proxy-3601/pods/https:proxy-service-xrs65-fgglc:462/proxy/: tls qux (200; 25.34251ms) Mar 22 01:25:27.373: INFO: (5) /api/v1/namespaces/proxy-3601/services/https:proxy-service-xrs65:tlsportname2/proxy/: tls qux (200; 25.404418ms) Mar 22 01:25:27.373: INFO: (5) /api/v1/namespaces/proxy-3601/services/proxy-service-xrs65:portname1/proxy/: foo (200; 25.445097ms) Mar 22 01:25:27.373: INFO: (5) /api/v1/namespaces/proxy-3601/pods/https:proxy-service-xrs65-fgglc:443/proxy/: test (200; 25.391908ms) Mar 22 01:25:27.373: INFO: (5) /api/v1/namespaces/proxy-3601/services/http:proxy-service-xrs65:portname2/proxy/: bar (200; 25.445584ms) Mar 22 01:25:27.376: INFO: (6) /api/v1/namespaces/proxy-3601/pods/proxy-service-xrs65-fgglc:1080/proxy/: test<... 
(200; 3.086195ms) Mar 22 01:25:27.376: INFO: (6) /api/v1/namespaces/proxy-3601/pods/proxy-service-xrs65-fgglc:160/proxy/: foo (200; 3.263472ms) Mar 22 01:25:27.376: INFO: (6) /api/v1/namespaces/proxy-3601/pods/proxy-service-xrs65-fgglc/proxy/: test (200; 3.372888ms) Mar 22 01:25:27.377: INFO: (6) /api/v1/namespaces/proxy-3601/pods/https:proxy-service-xrs65-fgglc:460/proxy/: tls baz (200; 3.770146ms) Mar 22 01:25:27.377: INFO: (6) /api/v1/namespaces/proxy-3601/pods/https:proxy-service-xrs65-fgglc:443/proxy/: ... (200; 6.174304ms) Mar 22 01:25:27.380: INFO: (6) /api/v1/namespaces/proxy-3601/services/https:proxy-service-xrs65:tlsportname2/proxy/: tls qux (200; 6.946434ms) Mar 22 01:25:27.380: INFO: (6) /api/v1/namespaces/proxy-3601/services/https:proxy-service-xrs65:tlsportname1/proxy/: tls baz (200; 7.054346ms) Mar 22 01:25:27.380: INFO: (6) /api/v1/namespaces/proxy-3601/pods/https:proxy-service-xrs65-fgglc:462/proxy/: tls qux (200; 7.021201ms) Mar 22 01:25:27.380: INFO: (6) /api/v1/namespaces/proxy-3601/services/http:proxy-service-xrs65:portname2/proxy/: bar (200; 6.982189ms) Mar 22 01:25:27.380: INFO: (6) /api/v1/namespaces/proxy-3601/services/proxy-service-xrs65:portname2/proxy/: bar (200; 7.059472ms) Mar 22 01:25:27.380: INFO: (6) /api/v1/namespaces/proxy-3601/services/proxy-service-xrs65:portname1/proxy/: foo (200; 7.048206ms) Mar 22 01:25:27.380: INFO: (6) /api/v1/namespaces/proxy-3601/services/http:proxy-service-xrs65:portname1/proxy/: foo (200; 6.969415ms) Mar 22 01:25:27.384: INFO: (7) /api/v1/namespaces/proxy-3601/services/http:proxy-service-xrs65:portname2/proxy/: bar (200; 4.432156ms) Mar 22 01:25:27.385: INFO: (7) /api/v1/namespaces/proxy-3601/services/proxy-service-xrs65:portname2/proxy/: bar (200; 4.454487ms) Mar 22 01:25:27.385: INFO: (7) /api/v1/namespaces/proxy-3601/services/http:proxy-service-xrs65:portname1/proxy/: foo (200; 4.709543ms) Mar 22 01:25:27.385: INFO: (7) /api/v1/namespaces/proxy-3601/services/proxy-service-xrs65:portname1/proxy/: foo (200; 4.958287ms) Mar 22 01:25:27.385: INFO: (7) /api/v1/namespaces/proxy-3601/pods/https:proxy-service-xrs65-fgglc:462/proxy/: tls qux (200; 4.978344ms) Mar 22 01:25:27.385: INFO: (7) /api/v1/namespaces/proxy-3601/pods/http:proxy-service-xrs65-fgglc:1080/proxy/: ... (200; 4.898176ms) Mar 22 01:25:27.385: INFO: (7) /api/v1/namespaces/proxy-3601/services/https:proxy-service-xrs65:tlsportname2/proxy/: tls qux (200; 4.929712ms) Mar 22 01:25:27.385: INFO: (7) /api/v1/namespaces/proxy-3601/pods/proxy-service-xrs65-fgglc:1080/proxy/: test<... (200; 4.956799ms) Mar 22 01:25:27.385: INFO: (7) /api/v1/namespaces/proxy-3601/pods/http:proxy-service-xrs65-fgglc:160/proxy/: foo (200; 5.014405ms) Mar 22 01:25:27.385: INFO: (7) /api/v1/namespaces/proxy-3601/pods/https:proxy-service-xrs65-fgglc:460/proxy/: tls baz (200; 5.062697ms) Mar 22 01:25:27.385: INFO: (7) /api/v1/namespaces/proxy-3601/services/https:proxy-service-xrs65:tlsportname1/proxy/: tls baz (200; 5.051904ms) Mar 22 01:25:27.385: INFO: (7) /api/v1/namespaces/proxy-3601/pods/http:proxy-service-xrs65-fgglc:162/proxy/: bar (200; 5.074547ms) Mar 22 01:25:27.385: INFO: (7) /api/v1/namespaces/proxy-3601/pods/proxy-service-xrs65-fgglc:162/proxy/: bar (200; 5.099106ms) Mar 22 01:25:27.385: INFO: (7) /api/v1/namespaces/proxy-3601/pods/proxy-service-xrs65-fgglc/proxy/: test (200; 5.161823ms) Mar 22 01:25:27.385: INFO: (7) /api/v1/namespaces/proxy-3601/pods/https:proxy-service-xrs65-fgglc:443/proxy/: ... 
(200; 7.266625ms) Mar 22 01:25:27.393: INFO: (8) /api/v1/namespaces/proxy-3601/pods/proxy-service-xrs65-fgglc:1080/proxy/: test<... (200; 7.381463ms) Mar 22 01:25:27.393: INFO: (8) /api/v1/namespaces/proxy-3601/services/http:proxy-service-xrs65:portname1/proxy/: foo (200; 7.534019ms) Mar 22 01:25:27.394: INFO: (8) /api/v1/namespaces/proxy-3601/pods/proxy-service-xrs65-fgglc:160/proxy/: foo (200; 8.49563ms) Mar 22 01:25:27.394: INFO: (8) /api/v1/namespaces/proxy-3601/pods/http:proxy-service-xrs65-fgglc:162/proxy/: bar (200; 8.500432ms) Mar 22 01:25:27.394: INFO: (8) /api/v1/namespaces/proxy-3601/pods/https:proxy-service-xrs65-fgglc:460/proxy/: tls baz (200; 8.823181ms) Mar 22 01:25:27.394: INFO: (8) /api/v1/namespaces/proxy-3601/pods/http:proxy-service-xrs65-fgglc:160/proxy/: foo (200; 8.830772ms) Mar 22 01:25:27.394: INFO: (8) /api/v1/namespaces/proxy-3601/pods/https:proxy-service-xrs65-fgglc:462/proxy/: tls qux (200; 8.839495ms) Mar 22 01:25:27.394: INFO: (8) /api/v1/namespaces/proxy-3601/pods/https:proxy-service-xrs65-fgglc:443/proxy/: test (200; 10.053042ms) Mar 22 01:25:27.395: INFO: (8) /api/v1/namespaces/proxy-3601/services/https:proxy-service-xrs65:tlsportname2/proxy/: tls qux (200; 10.123833ms) Mar 22 01:25:27.395: INFO: (8) /api/v1/namespaces/proxy-3601/services/http:proxy-service-xrs65:portname2/proxy/: bar (200; 10.128281ms) Mar 22 01:25:27.396: INFO: (8) /api/v1/namespaces/proxy-3601/services/https:proxy-service-xrs65:tlsportname1/proxy/: tls baz (200; 10.269777ms) Mar 22 01:25:27.399: INFO: (9) /api/v1/namespaces/proxy-3601/pods/http:proxy-service-xrs65-fgglc:160/proxy/: foo (200; 2.930957ms) Mar 22 01:25:27.399: INFO: (9) /api/v1/namespaces/proxy-3601/pods/https:proxy-service-xrs65-fgglc:462/proxy/: tls qux (200; 3.039645ms) Mar 22 01:25:27.399: INFO: (9) /api/v1/namespaces/proxy-3601/pods/https:proxy-service-xrs65-fgglc:460/proxy/: tls baz (200; 3.089597ms) Mar 22 01:25:27.399: INFO: (9) /api/v1/namespaces/proxy-3601/pods/proxy-service-xrs65-fgglc:1080/proxy/: test<... (200; 3.190821ms) Mar 22 01:25:27.399: INFO: (9) /api/v1/namespaces/proxy-3601/pods/https:proxy-service-xrs65-fgglc:443/proxy/: test (200; 5.175589ms) Mar 22 01:25:27.401: INFO: (9) /api/v1/namespaces/proxy-3601/services/http:proxy-service-xrs65:portname1/proxy/: foo (200; 5.201583ms) Mar 22 01:25:27.401: INFO: (9) /api/v1/namespaces/proxy-3601/pods/http:proxy-service-xrs65-fgglc:162/proxy/: bar (200; 5.233295ms) Mar 22 01:25:27.401: INFO: (9) /api/v1/namespaces/proxy-3601/pods/http:proxy-service-xrs65-fgglc:1080/proxy/: ... 
(200; 5.19109ms) Mar 22 01:25:27.402: INFO: (9) /api/v1/namespaces/proxy-3601/services/https:proxy-service-xrs65:tlsportname2/proxy/: tls qux (200; 5.962803ms) Mar 22 01:25:27.402: INFO: (9) /api/v1/namespaces/proxy-3601/services/https:proxy-service-xrs65:tlsportname1/proxy/: tls baz (200; 6.126605ms) Mar 22 01:25:27.406: INFO: (10) /api/v1/namespaces/proxy-3601/services/http:proxy-service-xrs65:portname1/proxy/: foo (200; 3.942661ms) Mar 22 01:25:27.406: INFO: (10) /api/v1/namespaces/proxy-3601/services/proxy-service-xrs65:portname1/proxy/: foo (200; 4.016284ms) Mar 22 01:25:27.406: INFO: (10) /api/v1/namespaces/proxy-3601/services/https:proxy-service-xrs65:tlsportname2/proxy/: tls qux (200; 4.198048ms) Mar 22 01:25:27.406: INFO: (10) /api/v1/namespaces/proxy-3601/services/https:proxy-service-xrs65:tlsportname1/proxy/: tls baz (200; 4.438126ms) Mar 22 01:25:27.406: INFO: (10) /api/v1/namespaces/proxy-3601/pods/proxy-service-xrs65-fgglc:160/proxy/: foo (200; 4.483988ms) Mar 22 01:25:27.406: INFO: (10) /api/v1/namespaces/proxy-3601/pods/https:proxy-service-xrs65-fgglc:462/proxy/: tls qux (200; 4.494606ms) Mar 22 01:25:27.406: INFO: (10) /api/v1/namespaces/proxy-3601/pods/http:proxy-service-xrs65-fgglc:160/proxy/: foo (200; 4.513186ms) Mar 22 01:25:27.406: INFO: (10) /api/v1/namespaces/proxy-3601/services/http:proxy-service-xrs65:portname2/proxy/: bar (200; 4.587418ms) Mar 22 01:25:27.406: INFO: (10) /api/v1/namespaces/proxy-3601/services/proxy-service-xrs65:portname2/proxy/: bar (200; 4.563732ms) Mar 22 01:25:27.406: INFO: (10) /api/v1/namespaces/proxy-3601/pods/https:proxy-service-xrs65-fgglc:460/proxy/: tls baz (200; 4.530253ms) Mar 22 01:25:27.407: INFO: (10) /api/v1/namespaces/proxy-3601/pods/http:proxy-service-xrs65-fgglc:1080/proxy/: ... (200; 4.624804ms) Mar 22 01:25:27.407: INFO: (10) /api/v1/namespaces/proxy-3601/pods/http:proxy-service-xrs65-fgglc:162/proxy/: bar (200; 4.574391ms) Mar 22 01:25:27.407: INFO: (10) /api/v1/namespaces/proxy-3601/pods/proxy-service-xrs65-fgglc/proxy/: test (200; 4.61919ms) Mar 22 01:25:27.407: INFO: (10) /api/v1/namespaces/proxy-3601/pods/proxy-service-xrs65-fgglc:162/proxy/: bar (200; 4.629049ms) Mar 22 01:25:27.407: INFO: (10) /api/v1/namespaces/proxy-3601/pods/https:proxy-service-xrs65-fgglc:443/proxy/: test<... (200; 5.17763ms) Mar 22 01:25:27.411: INFO: (11) /api/v1/namespaces/proxy-3601/pods/http:proxy-service-xrs65-fgglc:160/proxy/: foo (200; 3.495076ms) Mar 22 01:25:27.411: INFO: (11) /api/v1/namespaces/proxy-3601/pods/proxy-service-xrs65-fgglc:162/proxy/: bar (200; 3.539408ms) Mar 22 01:25:27.411: INFO: (11) /api/v1/namespaces/proxy-3601/pods/https:proxy-service-xrs65-fgglc:443/proxy/: ... (200; 3.938871ms) Mar 22 01:25:27.411: INFO: (11) /api/v1/namespaces/proxy-3601/pods/proxy-service-xrs65-fgglc:160/proxy/: foo (200; 4.1736ms) Mar 22 01:25:27.412: INFO: (11) /api/v1/namespaces/proxy-3601/services/https:proxy-service-xrs65:tlsportname2/proxy/: tls qux (200; 4.687195ms) Mar 22 01:25:27.412: INFO: (11) /api/v1/namespaces/proxy-3601/services/http:proxy-service-xrs65:portname1/proxy/: foo (200; 4.995677ms) Mar 22 01:25:27.412: INFO: (11) /api/v1/namespaces/proxy-3601/services/proxy-service-xrs65:portname2/proxy/: bar (200; 4.951923ms) Mar 22 01:25:27.412: INFO: (11) /api/v1/namespaces/proxy-3601/pods/proxy-service-xrs65-fgglc/proxy/: test (200; 5.020243ms) Mar 22 01:25:27.412: INFO: (11) /api/v1/namespaces/proxy-3601/pods/proxy-service-xrs65-fgglc:1080/proxy/: test<... 
(200; 4.991557ms) Mar 22 01:25:27.412: INFO: (11) /api/v1/namespaces/proxy-3601/services/proxy-service-xrs65:portname1/proxy/: foo (200; 5.054084ms) Mar 22 01:25:27.412: INFO: (11) /api/v1/namespaces/proxy-3601/services/http:proxy-service-xrs65:portname2/proxy/: bar (200; 5.012118ms) Mar 22 01:25:27.412: INFO: (11) /api/v1/namespaces/proxy-3601/pods/https:proxy-service-xrs65-fgglc:460/proxy/: tls baz (200; 5.082276ms) Mar 22 01:25:27.412: INFO: (11) /api/v1/namespaces/proxy-3601/services/https:proxy-service-xrs65:tlsportname1/proxy/: tls baz (200; 5.038153ms) Mar 22 01:25:27.415: INFO: (12) /api/v1/namespaces/proxy-3601/pods/https:proxy-service-xrs65-fgglc:460/proxy/: tls baz (200; 2.426567ms) Mar 22 01:25:27.415: INFO: (12) /api/v1/namespaces/proxy-3601/pods/http:proxy-service-xrs65-fgglc:162/proxy/: bar (200; 2.560117ms) Mar 22 01:25:27.415: INFO: (12) /api/v1/namespaces/proxy-3601/pods/http:proxy-service-xrs65-fgglc:1080/proxy/: ... (200; 2.610716ms) Mar 22 01:25:27.415: INFO: (12) /api/v1/namespaces/proxy-3601/pods/proxy-service-xrs65-fgglc:162/proxy/: bar (200; 3.0875ms) Mar 22 01:25:27.415: INFO: (12) /api/v1/namespaces/proxy-3601/pods/http:proxy-service-xrs65-fgglc:160/proxy/: foo (200; 3.115305ms) Mar 22 01:25:27.415: INFO: (12) /api/v1/namespaces/proxy-3601/pods/proxy-service-xrs65-fgglc:1080/proxy/: test<... (200; 3.090368ms) Mar 22 01:25:27.415: INFO: (12) /api/v1/namespaces/proxy-3601/pods/https:proxy-service-xrs65-fgglc:443/proxy/: test (200; 4.190734ms) Mar 22 01:25:27.416: INFO: (12) /api/v1/namespaces/proxy-3601/pods/https:proxy-service-xrs65-fgglc:462/proxy/: tls qux (200; 4.228665ms) Mar 22 01:25:27.417: INFO: (12) /api/v1/namespaces/proxy-3601/services/https:proxy-service-xrs65:tlsportname1/proxy/: tls baz (200; 4.240815ms) Mar 22 01:25:27.417: INFO: (12) /api/v1/namespaces/proxy-3601/services/http:proxy-service-xrs65:portname2/proxy/: bar (200; 4.424887ms) Mar 22 01:25:27.417: INFO: (12) /api/v1/namespaces/proxy-3601/services/https:proxy-service-xrs65:tlsportname2/proxy/: tls qux (200; 4.526675ms) Mar 22 01:25:27.420: INFO: (13) /api/v1/namespaces/proxy-3601/pods/http:proxy-service-xrs65-fgglc:1080/proxy/: ... (200; 2.806562ms) Mar 22 01:25:27.420: INFO: (13) /api/v1/namespaces/proxy-3601/pods/proxy-service-xrs65-fgglc/proxy/: test (200; 3.407511ms) Mar 22 01:25:27.420: INFO: (13) /api/v1/namespaces/proxy-3601/services/http:proxy-service-xrs65:portname1/proxy/: foo (200; 3.492281ms) Mar 22 01:25:27.420: INFO: (13) /api/v1/namespaces/proxy-3601/pods/proxy-service-xrs65-fgglc:1080/proxy/: test<... 
(200; 3.494455ms) Mar 22 01:25:27.421: INFO: (13) /api/v1/namespaces/proxy-3601/pods/https:proxy-service-xrs65-fgglc:462/proxy/: tls qux (200; 3.736041ms) Mar 22 01:25:27.421: INFO: (13) /api/v1/namespaces/proxy-3601/pods/https:proxy-service-xrs65-fgglc:460/proxy/: tls baz (200; 3.7855ms) Mar 22 01:25:27.421: INFO: (13) /api/v1/namespaces/proxy-3601/services/proxy-service-xrs65:portname2/proxy/: bar (200; 3.855991ms) Mar 22 01:25:27.421: INFO: (13) /api/v1/namespaces/proxy-3601/pods/proxy-service-xrs65-fgglc:162/proxy/: bar (200; 3.775824ms) Mar 22 01:25:27.421: INFO: (13) /api/v1/namespaces/proxy-3601/services/https:proxy-service-xrs65:tlsportname1/proxy/: tls baz (200; 4.02451ms) Mar 22 01:25:27.421: INFO: (13) /api/v1/namespaces/proxy-3601/pods/http:proxy-service-xrs65-fgglc:162/proxy/: bar (200; 4.029544ms) Mar 22 01:25:27.421: INFO: (13) /api/v1/namespaces/proxy-3601/pods/proxy-service-xrs65-fgglc:160/proxy/: foo (200; 4.136285ms) Mar 22 01:25:27.421: INFO: (13) /api/v1/namespaces/proxy-3601/pods/http:proxy-service-xrs65-fgglc:160/proxy/: foo (200; 4.029261ms) Mar 22 01:25:27.421: INFO: (13) /api/v1/namespaces/proxy-3601/services/https:proxy-service-xrs65:tlsportname2/proxy/: tls qux (200; 4.106169ms) Mar 22 01:25:27.421: INFO: (13) /api/v1/namespaces/proxy-3601/services/proxy-service-xrs65:portname1/proxy/: foo (200; 4.283826ms) Mar 22 01:25:27.421: INFO: (13) /api/v1/namespaces/proxy-3601/pods/https:proxy-service-xrs65-fgglc:443/proxy/: test<... (200; 4.64647ms) Mar 22 01:25:27.426: INFO: (14) /api/v1/namespaces/proxy-3601/services/http:proxy-service-xrs65:portname1/proxy/: foo (200; 4.704194ms) Mar 22 01:25:27.426: INFO: (14) /api/v1/namespaces/proxy-3601/pods/https:proxy-service-xrs65-fgglc:460/proxy/: tls baz (200; 4.70442ms) Mar 22 01:25:27.426: INFO: (14) /api/v1/namespaces/proxy-3601/pods/https:proxy-service-xrs65-fgglc:462/proxy/: tls qux (200; 4.726697ms) Mar 22 01:25:27.426: INFO: (14) /api/v1/namespaces/proxy-3601/services/proxy-service-xrs65:portname2/proxy/: bar (200; 4.734033ms) Mar 22 01:25:27.426: INFO: (14) /api/v1/namespaces/proxy-3601/services/http:proxy-service-xrs65:portname2/proxy/: bar (200; 4.77108ms) Mar 22 01:25:27.426: INFO: (14) /api/v1/namespaces/proxy-3601/pods/proxy-service-xrs65-fgglc/proxy/: test (200; 4.747865ms) Mar 22 01:25:27.426: INFO: (14) /api/v1/namespaces/proxy-3601/pods/http:proxy-service-xrs65-fgglc:162/proxy/: bar (200; 4.902265ms) Mar 22 01:25:27.426: INFO: (14) /api/v1/namespaces/proxy-3601/pods/http:proxy-service-xrs65-fgglc:1080/proxy/: ... (200; 4.956686ms) Mar 22 01:25:27.426: INFO: (14) /api/v1/namespaces/proxy-3601/pods/http:proxy-service-xrs65-fgglc:160/proxy/: foo (200; 4.949859ms) Mar 22 01:25:27.430: INFO: (15) /api/v1/namespaces/proxy-3601/pods/https:proxy-service-xrs65-fgglc:460/proxy/: tls baz (200; 3.101408ms) Mar 22 01:25:27.430: INFO: (15) /api/v1/namespaces/proxy-3601/pods/proxy-service-xrs65-fgglc:1080/proxy/: test<... 
(200; 3.103276ms) Mar 22 01:25:27.430: INFO: (15) /api/v1/namespaces/proxy-3601/pods/proxy-service-xrs65-fgglc:162/proxy/: bar (200; 3.849234ms) Mar 22 01:25:27.430: INFO: (15) /api/v1/namespaces/proxy-3601/services/proxy-service-xrs65:portname2/proxy/: bar (200; 4.038524ms) Mar 22 01:25:27.431: INFO: (15) /api/v1/namespaces/proxy-3601/services/http:proxy-service-xrs65:portname1/proxy/: foo (200; 3.949798ms) Mar 22 01:25:27.431: INFO: (15) /api/v1/namespaces/proxy-3601/pods/https:proxy-service-xrs65-fgglc:462/proxy/: tls qux (200; 4.00072ms) Mar 22 01:25:27.431: INFO: (15) /api/v1/namespaces/proxy-3601/pods/http:proxy-service-xrs65-fgglc:162/proxy/: bar (200; 3.924203ms) Mar 22 01:25:27.431: INFO: (15) /api/v1/namespaces/proxy-3601/pods/proxy-service-xrs65-fgglc:160/proxy/: foo (200; 4.096708ms) Mar 22 01:25:27.431: INFO: (15) /api/v1/namespaces/proxy-3601/pods/https:proxy-service-xrs65-fgglc:443/proxy/: ... (200; 4.334892ms) Mar 22 01:25:27.431: INFO: (15) /api/v1/namespaces/proxy-3601/services/https:proxy-service-xrs65:tlsportname1/proxy/: tls baz (200; 4.26143ms) Mar 22 01:25:27.431: INFO: (15) /api/v1/namespaces/proxy-3601/pods/http:proxy-service-xrs65-fgglc:160/proxy/: foo (200; 4.394816ms) Mar 22 01:25:27.431: INFO: (15) /api/v1/namespaces/proxy-3601/pods/proxy-service-xrs65-fgglc/proxy/: test (200; 4.413003ms) Mar 22 01:25:27.433: INFO: (16) /api/v1/namespaces/proxy-3601/pods/https:proxy-service-xrs65-fgglc:443/proxy/: ... (200; 3.791895ms) Mar 22 01:25:27.435: INFO: (16) /api/v1/namespaces/proxy-3601/pods/proxy-service-xrs65-fgglc:1080/proxy/: test<... (200; 3.804516ms) Mar 22 01:25:27.435: INFO: (16) /api/v1/namespaces/proxy-3601/pods/http:proxy-service-xrs65-fgglc:160/proxy/: foo (200; 3.857005ms) Mar 22 01:25:27.435: INFO: (16) /api/v1/namespaces/proxy-3601/pods/proxy-service-xrs65-fgglc:162/proxy/: bar (200; 3.865036ms) Mar 22 01:25:27.435: INFO: (16) /api/v1/namespaces/proxy-3601/pods/https:proxy-service-xrs65-fgglc:462/proxy/: tls qux (200; 4.314987ms) Mar 22 01:25:27.435: INFO: (16) /api/v1/namespaces/proxy-3601/pods/proxy-service-xrs65-fgglc/proxy/: test (200; 4.343848ms) Mar 22 01:25:27.435: INFO: (16) /api/v1/namespaces/proxy-3601/pods/https:proxy-service-xrs65-fgglc:460/proxy/: tls baz (200; 4.338459ms) Mar 22 01:25:27.436: INFO: (16) /api/v1/namespaces/proxy-3601/pods/http:proxy-service-xrs65-fgglc:162/proxy/: bar (200; 4.546516ms) Mar 22 01:25:27.436: INFO: (16) /api/v1/namespaces/proxy-3601/services/http:proxy-service-xrs65:portname1/proxy/: foo (200; 4.723365ms) Mar 22 01:25:27.436: INFO: (16) /api/v1/namespaces/proxy-3601/services/http:proxy-service-xrs65:portname2/proxy/: bar (200; 4.74745ms) Mar 22 01:25:27.436: INFO: (16) /api/v1/namespaces/proxy-3601/services/proxy-service-xrs65:portname2/proxy/: bar (200; 4.896317ms) Mar 22 01:25:27.436: INFO: (16) /api/v1/namespaces/proxy-3601/services/https:proxy-service-xrs65:tlsportname1/proxy/: tls baz (200; 4.908215ms) Mar 22 01:25:27.436: INFO: (16) /api/v1/namespaces/proxy-3601/services/https:proxy-service-xrs65:tlsportname2/proxy/: tls qux (200; 5.020744ms) Mar 22 01:25:27.436: INFO: (16) /api/v1/namespaces/proxy-3601/services/proxy-service-xrs65:portname1/proxy/: foo (200; 5.059586ms) Mar 22 01:25:27.439: INFO: (17) /api/v1/namespaces/proxy-3601/pods/proxy-service-xrs65-fgglc/proxy/: test (200; 2.788839ms) Mar 22 01:25:27.439: INFO: (17) /api/v1/namespaces/proxy-3601/pods/proxy-service-xrs65-fgglc:1080/proxy/: test<... 
(200; 2.766433ms) Mar 22 01:25:27.440: INFO: (17) /api/v1/namespaces/proxy-3601/services/proxy-service-xrs65:portname1/proxy/: foo (200; 3.454458ms) Mar 22 01:25:27.440: INFO: (17) /api/v1/namespaces/proxy-3601/pods/https:proxy-service-xrs65-fgglc:460/proxy/: tls baz (200; 3.920272ms) Mar 22 01:25:27.440: INFO: (17) /api/v1/namespaces/proxy-3601/services/https:proxy-service-xrs65:tlsportname2/proxy/: tls qux (200; 3.999921ms) Mar 22 01:25:27.440: INFO: (17) /api/v1/namespaces/proxy-3601/services/proxy-service-xrs65:portname2/proxy/: bar (200; 3.963963ms) Mar 22 01:25:27.440: INFO: (17) /api/v1/namespaces/proxy-3601/services/http:proxy-service-xrs65:portname2/proxy/: bar (200; 4.020486ms) Mar 22 01:25:27.440: INFO: (17) /api/v1/namespaces/proxy-3601/pods/https:proxy-service-xrs65-fgglc:443/proxy/: ... (200; 4.072805ms) Mar 22 01:25:27.440: INFO: (17) /api/v1/namespaces/proxy-3601/pods/proxy-service-xrs65-fgglc:160/proxy/: foo (200; 4.165036ms) Mar 22 01:25:27.440: INFO: (17) /api/v1/namespaces/proxy-3601/services/https:proxy-service-xrs65:tlsportname1/proxy/: tls baz (200; 4.124699ms) Mar 22 01:25:27.440: INFO: (17) /api/v1/namespaces/proxy-3601/pods/https:proxy-service-xrs65-fgglc:462/proxy/: tls qux (200; 4.15695ms) Mar 22 01:25:27.441: INFO: (17) /api/v1/namespaces/proxy-3601/pods/proxy-service-xrs65-fgglc:162/proxy/: bar (200; 4.3408ms) Mar 22 01:25:27.443: INFO: (18) /api/v1/namespaces/proxy-3601/pods/proxy-service-xrs65-fgglc:160/proxy/: foo (200; 2.507153ms) Mar 22 01:25:27.443: INFO: (18) /api/v1/namespaces/proxy-3601/pods/proxy-service-xrs65-fgglc/proxy/: test (200; 2.707378ms) Mar 22 01:25:27.444: INFO: (18) /api/v1/namespaces/proxy-3601/pods/http:proxy-service-xrs65-fgglc:1080/proxy/: ... (200; 2.967788ms) Mar 22 01:25:27.444: INFO: (18) /api/v1/namespaces/proxy-3601/pods/http:proxy-service-xrs65-fgglc:162/proxy/: bar (200; 3.287056ms) Mar 22 01:25:27.444: INFO: (18) /api/v1/namespaces/proxy-3601/pods/proxy-service-xrs65-fgglc:1080/proxy/: test<... (200; 3.598392ms) Mar 22 01:25:27.445: INFO: (18) /api/v1/namespaces/proxy-3601/pods/http:proxy-service-xrs65-fgglc:160/proxy/: foo (200; 4.226472ms) Mar 22 01:25:27.445: INFO: (18) /api/v1/namespaces/proxy-3601/services/proxy-service-xrs65:portname2/proxy/: bar (200; 4.19723ms) Mar 22 01:25:27.445: INFO: (18) /api/v1/namespaces/proxy-3601/services/https:proxy-service-xrs65:tlsportname2/proxy/: tls qux (200; 4.344181ms) Mar 22 01:25:27.445: INFO: (18) /api/v1/namespaces/proxy-3601/services/http:proxy-service-xrs65:portname1/proxy/: foo (200; 4.363809ms) Mar 22 01:25:27.445: INFO: (18) /api/v1/namespaces/proxy-3601/pods/proxy-service-xrs65-fgglc:162/proxy/: bar (200; 4.491724ms) Mar 22 01:25:27.445: INFO: (18) /api/v1/namespaces/proxy-3601/pods/https:proxy-service-xrs65-fgglc:462/proxy/: tls qux (200; 4.816149ms) Mar 22 01:25:27.445: INFO: (18) /api/v1/namespaces/proxy-3601/pods/https:proxy-service-xrs65-fgglc:443/proxy/: ... 
(200; 2.644984ms) Mar 22 01:25:27.450: INFO: (19) /api/v1/namespaces/proxy-3601/services/proxy-service-xrs65:portname1/proxy/: foo (200; 4.072217ms) Mar 22 01:25:27.450: INFO: (19) /api/v1/namespaces/proxy-3601/services/proxy-service-xrs65:portname2/proxy/: bar (200; 4.235656ms) Mar 22 01:25:27.450: INFO: (19) /api/v1/namespaces/proxy-3601/services/http:proxy-service-xrs65:portname1/proxy/: foo (200; 4.278606ms) Mar 22 01:25:27.450: INFO: (19) /api/v1/namespaces/proxy-3601/pods/http:proxy-service-xrs65-fgglc:162/proxy/: bar (200; 4.58686ms) Mar 22 01:25:27.450: INFO: (19) /api/v1/namespaces/proxy-3601/services/http:proxy-service-xrs65:portname2/proxy/: bar (200; 4.632093ms) Mar 22 01:25:27.450: INFO: (19) /api/v1/namespaces/proxy-3601/pods/proxy-service-xrs65-fgglc:1080/proxy/: test<... (200; 4.700119ms) Mar 22 01:25:27.450: INFO: (19) /api/v1/namespaces/proxy-3601/pods/proxy-service-xrs65-fgglc/proxy/: test (200; 4.649589ms) Mar 22 01:25:27.450: INFO: (19) /api/v1/namespaces/proxy-3601/pods/http:proxy-service-xrs65-fgglc:160/proxy/: foo (200; 4.674249ms) Mar 22 01:25:27.450: INFO: (19) /api/v1/namespaces/proxy-3601/services/https:proxy-service-xrs65:tlsportname1/proxy/: tls baz (200; 4.67742ms) Mar 22 01:25:27.450: INFO: (19) /api/v1/namespaces/proxy-3601/services/https:proxy-service-xrs65:tlsportname2/proxy/: tls qux (200; 4.690795ms) Mar 22 01:25:27.450: INFO: (19) /api/v1/namespaces/proxy-3601/pods/https:proxy-service-xrs65-fgglc:443/proxy/:
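Each timed attempt in the run above is a plain GET against the apiserver's proxy subresource; the scheme and port are encoded in the resource name segment (http:...:160, https:...:462, or a service port name such as tlsportname1). A rough client-go sketch of a single attempt (the resource path is copied from the log; the kubeconfig path is an assumption):

```go
package main

import (
	"context"
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config") // assumed path
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Mirrors one attempt from the log: proxy to port 162 of the echo pod.
	// The namespace and pod name are taken from the log; any reachable pod works.
	body, err := cs.CoreV1().RESTClient().Get().
		AbsPath("/api/v1/namespaces/proxy-3601/pods/proxy-service-xrs65-fgglc:162/proxy/").
		DoRaw(context.TODO())
	if err != nil {
		panic(err)
	}
	fmt.Printf("proxied response: %q\n", string(body)) // e.g. "bar" per the log
}
```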
>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 22 01:25:46.176: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 22 01:25:48.185: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751973146, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751973146, loc:(*time.Location)(0x99208a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751973146, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751973146, loc:(*time.Location)(0x99208a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-8977db\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 22 01:25:51.305: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should include webhook resources in discovery documents [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: fetching the /apis discovery document STEP: finding the admissionregistration.k8s.io API group in the /apis discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/admissionregistration.k8s.io discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis/admissionregistration.k8s.io discovery document STEP: fetching the /apis/admissionregistration.k8s.io/v1 discovery document STEP: finding mutatingwebhookconfigurations and validatingwebhookconfigurations resources in the /apis/admissionregistration.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 01:25:51.314: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4439" for this suite. STEP: Destroying namespace "webhook-4439-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 • [SLOW TEST:6.403 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should include webhook resources in discovery documents [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":330,"completed":204,"skipped":3558,"failed":15,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]"]} SSSS ------------------------------
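The discovery walk above needs nothing webhook-specific: it reads /apis, then the group document, then the group/version resource list. A sketch with the discovery client (group/version and resource names as in the log; the kubeconfig path is an assumption):

```go
package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config") // assumed path
	if err != nil {
		panic(err)
	}
	disc := kubernetes.NewForConfigOrDie(cfg).Discovery()

	// /apis: confirm the admissionregistration.k8s.io group is advertised.
	groups, err := disc.ServerGroups()
	if err != nil {
		panic(err)
	}
	for _, g := range groups.Groups {
		if g.Name == "admissionregistration.k8s.io" {
			fmt.Println("group found, preferred:", g.PreferredVersion.GroupVersion)
		}
	}

	// /apis/admissionregistration.k8s.io/v1: look for the two webhook resources.
	rl, err := disc.ServerResourcesForGroupVersion("admissionregistration.k8s.io/v1")
	if err != nil {
		panic(err)
	}
	for _, r := range rl.APIResources {
		if r.Name == "mutatingwebhookconfigurations" || r.Name == "validatingwebhookconfigurations" {
			fmt.Println("resource found:", r.Name)
		}
	}
}
```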
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 01:25:51.449: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] getting/updating/patching custom resource definition status sub-resource works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Mar 22 01:25:51.590: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 01:25:52.163: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-8142" for this suite. •{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance]","total":330,"completed":205,"skipped":3562,"failed":15,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------
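The status sub-resource exercised here is reached with the usual Get/Update/Patch verbs plus a "status" subresource argument on the apiextensions client. A sketch of the patch step (the CRD name and condition are placeholders, though the real test records custom conditions in much this way):

```go
package main

import (
	"context"
	"fmt"

	apiext "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config") // assumed path
	if err != nil {
		panic(err)
	}
	client := apiext.NewForConfigOrDie(cfg)
	crds := client.ApiextensionsV1().CustomResourceDefinitions()

	// Patch only the status sub-resource; a merge patch on /status leaves
	// the spec untouched. Name and condition values are illustrative.
	patch := []byte(`{"status":{"conditions":[{"type":"StatusPatched","status":"True","reason":"E2E","message":"patched via the status sub-resource"}]}}`)
	crd, err := crds.Patch(context.TODO(), "noxus.mygroup.example.com",
		types.MergePatchType, patch, metav1.PatchOptions{}, "status")
	if err != nil {
		panic(err)
	}
	fmt.Println("status conditions:", len(crd.Status.Conditions))
}
```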
[Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 01:25:52.233: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a pod. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Pod that fits quota STEP: Ensuring ResourceQuota status captures the pod usage STEP: Not allowing a pod to be created that exceeds remaining quota STEP: Not allowing a pod to be created that exceeds remaining quota(validation on extended resources) STEP: Ensuring a pod cannot update its resource requirements STEP: Ensuring attempts to update pod resource requirements did not change quota usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 01:26:05.877: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-7095" for this suite. • [SLOW TEST:13.656 seconds] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a pod. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. 
[Conformance]","total":330,"completed":206,"skipped":3595,"failed":15,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] CronJob should support CronJob API operations [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 01:26:05.891: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename cronjob STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:63 [It] should support CronJob API operations [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a cronjob STEP: creating Mar 22 01:26:06.014: FAIL: Unexpected error: <*errors.StatusError | 0xc004481400>: { ErrStatus: { TypeMeta: {Kind: "", APIVersion: ""}, ListMeta: { SelfLink: "", ResourceVersion: "", Continue: "", RemainingItemCount: nil, }, Status: "Failure", Message: "the server could not find the requested resource", Reason: "NotFound", Details: {Name: "", Group: "", Kind: "", UID: "", Causes: nil, RetryAfterSeconds: 0}, Code: 404, }, } the server could not find the requested resource occurred Full Stack Trace k8s.io/kubernetes/test/e2e/apps.glob..func1.11() /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:327 +0x345 k8s.io/kubernetes/test/e2e.RunE2ETests(0xc002c6a180) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c k8s.io/kubernetes/test/e2e.TestE2E(0xc002c6a180) 
_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b testing.tRunner(0xc002c6a180, 0x6d60740) /usr/local/go/src/testing/testing.go:1194 +0xef created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1239 +0x2b3 [AfterEach] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 STEP: Collecting events from namespace "cronjob-5175". STEP: Found 0 events. Mar 22 01:26:06.022: INFO: POD NODE PHASE GRACE CONDITIONS Mar 22 01:26:06.022: INFO: Mar 22 01:26:06.025: INFO: Logging node info for node latest-control-plane Mar 22 01:26:06.028: INFO: Node Info: &Node{ObjectMeta:{latest-control-plane 490b9532-4cb6-4803-8805-500c50bef538 7010970 0 2021-02-19 10:11:38 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-control-plane kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master:] map[kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2021-02-19 10:11:38 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {kubeadm Update v1 2021-02-19 10:11:39 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{}}}}} {kube-controller-manager Update v1 2021-02-19 10:12:16 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:kind://docker/latest/latest-control-plane,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-03-22 01:24:46 +0000 UTC,LastTransitionTime:2021-02-19 10:11:38 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-03-22 01:24:46 +0000 UTC,LastTransitionTime:2021-02-19 10:11:38 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-03-22 01:24:46 +0000 UTC,LastTransitionTime:2021-02-19 10:11:38 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-03-22 01:24:46 +0000 UTC,LastTransitionTime:2021-02-19 10:12:15 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.14,},NodeAddress{Type:Hostname,Address:latest-control-plane,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:2277e49732264d9b915753a27b5b08cc,SystemUUID:3fcd47a6-9190-448f-a26a-9823c0424f23,BootID:b267d78b-f69b-4338-80e8-3f4944338e5d,KernelVersion:4.15.0-118-generic,OSImage:Ubuntu 20.04 LTS,ContainerRuntimeVersion:containerd://1.3.3-14-g449e9269,KubeletVersion:v1.21.0-alpha.0,KubeProxyVersion:v1.21.0-alpha.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/etcd:3.4.13-0],SizeBytes:254659261,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.21.0-alpha.0],SizeBytes:136866158,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.21.0-alpha.0],SizeBytes:95511852,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.21.0-alpha.0],SizeBytes:88147262,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.21.0-alpha.0],SizeBytes:66088748,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:a3f7549ea04c419276e8b84e90a515bbce5bc8a057be2ed974ec45492eca346e k8s.gcr.io/e2e-test-images/agnhost:2.26],SizeBytes:49216572,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:4f3eec1e602e1e0b0e5858c8f3d399d0ef536799ec1f7666e025f211001cb706 
k8s.gcr.io/e2e-test-images/agnhost:2.28],SizeBytes:49210832,},ContainerImage{Names:[k8s.gcr.io/coredns:1.7.0],SizeBytes:45355487,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 docker.io/aquasec/kube-bench:0.3.1],SizeBytes:8042926,},ContainerImage{Names:[k8s.gcr.io/pause:3.2],SizeBytes:685724,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:301268,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 22 01:26:06.028: INFO: Logging kubelet events for node latest-control-plane Mar 22 01:26:06.030: INFO: Logging pods the kubelet thinks is on node latest-control-plane Mar 22 01:26:06.036: INFO: coredns-74ff55c5b-9rxsk started at 2021-03-22 00:01:47 +0000 UTC (0+1 container statuses recorded) Mar 22 01:26:06.036: INFO: Container coredns ready: true, restart count 0 Mar 22 01:26:06.036: INFO: local-path-provisioner-8b46957d4-54gls started at 2021-02-19 10:12:21 +0000 UTC (0+1 container statuses recorded) Mar 22 01:26:06.036: INFO: Container local-path-provisioner ready: true, restart count 0 Mar 22 01:26:06.036: INFO: kube-controller-manager-latest-control-plane started at 2021-02-19 10:11:46 +0000 UTC (0+1 container statuses recorded) Mar 22 01:26:06.036: INFO: Container kube-controller-manager ready: true, restart count 0 Mar 22 01:26:06.036: INFO: kube-scheduler-latest-control-plane started at 2021-02-19 10:11:46 +0000 UTC (0+1 container statuses recorded) Mar 22 01:26:06.036: INFO: Container kube-scheduler ready: true, restart count 0 Mar 22 01:26:06.036: INFO: kube-apiserver-latest-control-plane started at 2021-02-19 10:11:46 +0000 UTC (0+1 container statuses recorded) Mar 22 01:26:06.036: INFO: Container kube-apiserver ready: true, restart count 0 Mar 22 01:26:06.036: INFO: kindnet-94zqp started at 2021-02-19 10:11:54 +0000 UTC (0+1 container statuses recorded) Mar 22 01:26:06.036: INFO: Container kindnet-cni ready: true, restart count 0 Mar 22 01:26:06.036: INFO: etcd-latest-control-plane started at 2021-02-19 10:11:46 +0000 UTC (0+1 container statuses recorded) Mar 22 01:26:06.036: INFO: Container etcd ready: true, restart count 0 Mar 22 01:26:06.036: INFO: kube-proxy-6jdsd started at 2021-02-19 10:11:54 +0000 UTC (0+1 container statuses recorded) Mar 22 01:26:06.036: INFO: Container kube-proxy ready: true, restart count 0 Mar 22 01:26:06.036: INFO: coredns-74ff55c5b-tqd5x started at 2021-03-22 00:01:48 +0000 UTC (0+1 container statuses recorded) Mar 22 01:26:06.036: INFO: Container coredns ready: true, restart count 0 W0322 01:26:06.041054 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. 
Mar 22 01:26:06.119: INFO: Latency metrics for node latest-control-plane Mar 22 01:26:06.119: INFO: Logging node info for node latest-worker Mar 22 01:26:06.124: INFO: Node Info: &Node{ObjectMeta:{latest-worker 52cd6d4b-d53f-435d-801a-04c2822dec44 7011159 0 2021-02-19 10:12:05 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux io.kubernetes.storage.mock/node:some-mock-node kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-worker kubernetes.io/os:linux] map[csi.volume.kubernetes.io/nodeid:{"csi-mock-csi-mock-volumes-11":"csi-mock-csi-mock-volumes-11","csi-mock-csi-mock-volumes-1170":"csi-mock-csi-mock-volumes-1170","csi-mock-csi-mock-volumes-1204":"csi-mock-csi-mock-volumes-1204","csi-mock-csi-mock-volumes-1243":"csi-mock-csi-mock-volumes-1243","csi-mock-csi-mock-volumes-135":"csi-mock-csi-mock-volumes-135","csi-mock-csi-mock-volumes-1517":"csi-mock-csi-mock-volumes-1517","csi-mock-csi-mock-volumes-1561":"csi-mock-csi-mock-volumes-1561","csi-mock-csi-mock-volumes-1595":"csi-mock-csi-mock-volumes-1595","csi-mock-csi-mock-volumes-1714":"csi-mock-csi-mock-volumes-1714","csi-mock-csi-mock-volumes-1732":"csi-mock-csi-mock-volumes-1732","csi-mock-csi-mock-volumes-1876":"csi-mock-csi-mock-volumes-1876","csi-mock-csi-mock-volumes-1945":"csi-mock-csi-mock-volumes-1945","csi-mock-csi-mock-volumes-2041":"csi-mock-csi-mock-volumes-2041","csi-mock-csi-mock-volumes-2057":"csi-mock-csi-mock-volumes-2057","csi-mock-csi-mock-volumes-2208":"csi-mock-csi-mock-volumes-2208","csi-mock-csi-mock-volumes-2302":"csi-mock-csi-mock-volumes-2302","csi-mock-csi-mock-volumes-2459":"csi-mock-csi-mock-volumes-2459","csi-mock-csi-mock-volumes-2511":"csi-mock-csi-mock-volumes-2511","csi-mock-csi-mock-volumes-2706":"csi-mock-csi-mock-volumes-2706","csi-mock-csi-mock-volumes-2710":"csi-mock-csi-mock-volumes-2710","csi-mock-csi-mock-volumes-273":"csi-mock-csi-mock-volumes-273","csi-mock-csi-mock-volumes-2782":"csi-mock-csi-mock-volumes-2782","csi-mock-csi-mock-volumes-2797":"csi-mock-csi-mock-volumes-2797","csi-mock-csi-mock-volumes-2805":"csi-mock-csi-mock-volumes-2805","csi-mock-csi-mock-volumes-2843":"csi-mock-csi-mock-volumes-2843","csi-mock-csi-mock-volumes-3075":"csi-mock-csi-mock-volumes-3075","csi-mock-csi-mock-volumes-3107":"csi-mock-csi-mock-volumes-3107","csi-mock-csi-mock-volumes-3115":"csi-mock-csi-mock-volumes-3115","csi-mock-csi-mock-volumes-3164":"csi-mock-csi-mock-volumes-3164","csi-mock-csi-mock-volumes-3202":"csi-mock-csi-mock-volumes-3202","csi-mock-csi-mock-volumes-3235":"csi-mock-csi-mock-volumes-3235","csi-mock-csi-mock-volumes-3298":"csi-mock-csi-mock-volumes-3298","csi-mock-csi-mock-volumes-3313":"csi-mock-csi-mock-volumes-3313","csi-mock-csi-mock-volumes-3364":"csi-mock-csi-mock-volumes-3364","csi-mock-csi-mock-volumes-3488":"csi-mock-csi-mock-volumes-3488","csi-mock-csi-mock-volumes-3490":"csi-mock-csi-mock-volumes-3490","csi-mock-csi-mock-volumes-3546":"csi-mock-csi-mock-volumes-3546","csi-mock-csi-mock-volumes-3568":"csi-mock-csi-mock-volumes-3568","csi-mock-csi-mock-volumes-3595":"csi-mock-csi-mock-volumes-3595","csi-mock-csi-mock-volumes-3615":"csi-mock-csi-mock-volumes-3615","csi-mock-csi-mock-volumes-3622":"csi-mock-csi-mock-volumes-3622","csi-mock-csi-mock-volumes-3660":"csi-mock-csi-mock-volumes-3660","csi-mock-csi-mock-volumes-3738":"csi-mock-csi-mock-volumes-3738","csi-mock-csi-mock-volumes-380":"csi-mock-csi-mock-volumes-380","csi-mock-csi-mock-volumes-3905":"csi-mock-csi-mock-volumes-3905","csi-mock-csi-mock-volumes-3983":"csi-mock-csi-mock-volumes-3983","csi-mock
-csi-mock-volumes-4658":"csi-mock-csi-mock-volumes-4658","csi-mock-csi-mock-volumes-4689":"csi-mock-csi-mock-volumes-4689","csi-mock-csi-mock-volumes-4839":"csi-mock-csi-mock-volumes-4839","csi-mock-csi-mock-volumes-4871":"csi-mock-csi-mock-volumes-4871","csi-mock-csi-mock-volumes-4885":"csi-mock-csi-mock-volumes-4885","csi-mock-csi-mock-volumes-4888":"csi-mock-csi-mock-volumes-4888","csi-mock-csi-mock-volumes-5028":"csi-mock-csi-mock-volumes-5028","csi-mock-csi-mock-volumes-5118":"csi-mock-csi-mock-volumes-5118","csi-mock-csi-mock-volumes-5120":"csi-mock-csi-mock-volumes-5120","csi-mock-csi-mock-volumes-5160":"csi-mock-csi-mock-volumes-5160","csi-mock-csi-mock-volumes-5164":"csi-mock-csi-mock-volumes-5164","csi-mock-csi-mock-volumes-5225":"csi-mock-csi-mock-volumes-5225","csi-mock-csi-mock-volumes-526":"csi-mock-csi-mock-volumes-526","csi-mock-csi-mock-volumes-5365":"csi-mock-csi-mock-volumes-5365","csi-mock-csi-mock-volumes-5399":"csi-mock-csi-mock-volumes-5399","csi-mock-csi-mock-volumes-5443":"csi-mock-csi-mock-volumes-5443","csi-mock-csi-mock-volumes-5520":"csi-mock-csi-mock-volumes-5520","csi-mock-csi-mock-volumes-5561":"csi-mock-csi-mock-volumes-5561","csi-mock-csi-mock-volumes-5608":"csi-mock-csi-mock-volumes-5608","csi-mock-csi-mock-volumes-5652":"csi-mock-csi-mock-volumes-5652","csi-mock-csi-mock-volumes-5672":"csi-mock-csi-mock-volumes-5672","csi-mock-csi-mock-volumes-569":"csi-mock-csi-mock-volumes-569","csi-mock-csi-mock-volumes-5759":"csi-mock-csi-mock-volumes-5759","csi-mock-csi-mock-volumes-5910":"csi-mock-csi-mock-volumes-5910","csi-mock-csi-mock-volumes-6046":"csi-mock-csi-mock-volumes-6046","csi-mock-csi-mock-volumes-6099":"csi-mock-csi-mock-volumes-6099","csi-mock-csi-mock-volumes-621":"csi-mock-csi-mock-volumes-621","csi-mock-csi-mock-volumes-6347":"csi-mock-csi-mock-volumes-6347","csi-mock-csi-mock-volumes-6447":"csi-mock-csi-mock-volumes-6447","csi-mock-csi-mock-volumes-6752":"csi-mock-csi-mock-volumes-6752","csi-mock-csi-mock-volumes-6763":"csi-mock-csi-mock-volumes-6763","csi-mock-csi-mock-volumes-7184":"csi-mock-csi-mock-volumes-7184","csi-mock-csi-mock-volumes-7244":"csi-mock-csi-mock-volumes-7244","csi-mock-csi-mock-volumes-7259":"csi-mock-csi-mock-volumes-7259","csi-mock-csi-mock-volumes-726":"csi-mock-csi-mock-volumes-726","csi-mock-csi-mock-volumes-7302":"csi-mock-csi-mock-volumes-7302","csi-mock-csi-mock-volumes-7346":"csi-mock-csi-mock-volumes-7346","csi-mock-csi-mock-volumes-7378":"csi-mock-csi-mock-volumes-7378","csi-mock-csi-mock-volumes-7385":"csi-mock-csi-mock-volumes-7385","csi-mock-csi-mock-volumes-746":"csi-mock-csi-mock-volumes-746","csi-mock-csi-mock-volumes-7574":"csi-mock-csi-mock-volumes-7574","csi-mock-csi-mock-volumes-7712":"csi-mock-csi-mock-volumes-7712","csi-mock-csi-mock-volumes-7749":"csi-mock-csi-mock-volumes-7749","csi-mock-csi-mock-volumes-7820":"csi-mock-csi-mock-volumes-7820","csi-mock-csi-mock-volumes-7946":"csi-mock-csi-mock-volumes-7946","csi-mock-csi-mock-volumes-8159":"csi-mock-csi-mock-volumes-8159","csi-mock-csi-mock-volumes-8382":"csi-mock-csi-mock-volumes-8382","csi-mock-csi-mock-volumes-8458":"csi-mock-csi-mock-volumes-8458","csi-mock-csi-mock-volumes-8569":"csi-mock-csi-mock-volumes-8569","csi-mock-csi-mock-volumes-8626":"csi-mock-csi-mock-volumes-8626","csi-mock-csi-mock-volumes-8627":"csi-mock-csi-mock-volumes-8627","csi-mock-csi-mock-volumes-8774":"csi-mock-csi-mock-volumes-8774","csi-mock-csi-mock-volumes-8777":"csi-mock-csi-mock-volumes-8777","csi-mock-csi-mock-volumes-8880":"csi-mock-csi-mock-volumes-8880","csi-mock-
csi-mock-volumes-923":"csi-mock-csi-mock-volumes-923","csi-mock-csi-mock-volumes-9279":"csi-mock-csi-mock-volumes-9279","csi-mock-csi-mock-volumes-9372":"csi-mock-csi-mock-volumes-9372","csi-mock-csi-mock-volumes-9687":"csi-mock-csi-mock-volumes-9687","csi-mock-csi-mock-volumes-969":"csi-mock-csi-mock-volumes-969","csi-mock-csi-mock-volumes-983":"csi-mock-csi-mock-volumes-983","csi-mock-csi-mock-volumes-9858":"csi-mock-csi-mock-volumes-9858","csi-mock-csi-mock-volumes-9983":"csi-mock-csi-mock-volumes-9983","csi-mock-csi-mock-volumes-9992":"csi-mock-csi-mock-volumes-9992","csi-mock-csi-mock-volumes-9995":"csi-mock-csi-mock-volumes-9995"} kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-02-19 10:12:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {kube-controller-manager Update v1 2021-03-22 00:39:31 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}}}}} {kubelet Update v1 2021-03-22 00:39:59 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:csi.volume.kubernetes.io/nodeid":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:io.kubernetes.storage.mock/node":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:kind://docker/latest/latest-worker,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 
DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-03-22 01:25:06 +0000 UTC,LastTransitionTime:2021-02-19 10:12:05 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-03-22 01:25:06 +0000 UTC,LastTransitionTime:2021-02-19 10:12:05 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-03-22 01:25:06 +0000 UTC,LastTransitionTime:2021-02-19 10:12:05 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-03-22 01:25:06 +0000 UTC,LastTransitionTime:2021-02-19 10:12:25 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.9,},NodeAddress{Type:Hostname,Address:latest-worker,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:9501a3ffa7bf40adaca8f4cb3d1dcc93,SystemUUID:6921bb21-67c9-42ea-b514-81b1daac5968,BootID:b267d78b-f69b-4338-80e8-3f4944338e5d,KernelVersion:4.15.0-118-generic,OSImage:Ubuntu 20.04 LTS,ContainerRuntimeVersion:containerd://1.3.3-14-g449e9269,KubeletVersion:v1.21.0-alpha.0,KubeProxyVersion:v1.21.0-alpha.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/ollivier/clearwater-cassandra@sha256:7460a5e29c91e16bd36534b5284b27832e1c6e0f7c22a4bb79eda79942d250e1 docker.io/ollivier/clearwater-cassandra:hunter],SizeBytes:386500834,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead-prov@sha256:ef3803b4336ea217f9a467dac7f5c4bc3d2cd0f3cbbcc6c101419506fc3a6fa4 docker.io/ollivier/clearwater-homestead-prov:hunter],SizeBytes:360721934,},ContainerImage{Names:[docker.io/ollivier/clearwater-ellis@sha256:abcaf4a9ec74aa746f3b3949f05ee359349fce2b0b3b2deedf67164719cff5dc docker.io/ollivier/clearwater-ellis:hunter],SizeBytes:351519591,},ContainerImage{Names:[docker.io/ollivier/clearwater-homer@sha256:f8738d756137990120f6aafca7e0bc57186ab591a4b51508ba9ddf30688f4de1 docker.io/ollivier/clearwater-homer:hunter],SizeBytes:344304298,},ContainerImage{Names:[docker.io/ollivier/clearwater-astaire@sha256:5bc730435563dc3e31f3a3b7bb58b5899c453929b624052622b3a9a1c2fd04d8 docker.io/ollivier/clearwater-astaire:hunter],SizeBytes:327310970,},ContainerImage{Names:[docker.io/ollivier/clearwater-bono@sha256:0da41ddbf72c158f8e617015450587634cdf6856696005601b2108df92a27254 docker.io/ollivier/clearwater-bono:hunter],SizeBytes:303708624,},ContainerImage{Names:[docker.io/ollivier/clearwater-sprout@sha256:fd91f90bdfbfac30288ca296962e36537f8fa311d34711904e027098227d9f49 docker.io/ollivier/clearwater-sprout:hunter],SizeBytes:298627136,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead@sha256:8467d6c9caefa6a402eba927ee31f8179b2dc3386ac4768dd04f4bd630a1b9e9 docker.io/ollivier/clearwater-homestead:hunter],SizeBytes:295167572,},ContainerImage{Names:[docker.io/ollivier/clearwater-ralf@sha256:7054a741dbd4ea45fa4efd8138c07e21d35fbed627971559de5df2d03883e94f 
docker.io/ollivier/clearwater-ralf:hunter],SizeBytes:287441316,},ContainerImage{Names:[docker.io/ollivier/clearwater-chronos@sha256:1d5f9b5385db632e3076463fec5a4db5d4a15ef21ac5a112cb42fb0551ffa36d docker.io/ollivier/clearwater-chronos:hunter],SizeBytes:285504787,},ContainerImage{Names:[k8s.gcr.io/etcd:3.4.13-0],SizeBytes:254659261,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.21.0-alpha.0],SizeBytes:136866158,},ContainerImage{Names:[docker.io/aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 docker.io/aquasec/kube-hunter:0.3.1],SizeBytes:124684106,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:112029652,},ContainerImage{Names:[docker.io/gluster/glusterdynamic-provisioner@sha256:90067cb05a7d217651e84576935934fd9dcff8e6bcdcbaa416bbf36fcd09dbd1 docker.io/gluster/glusterdynamic-provisioner:v1.0],SizeBytes:111200078,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner@sha256:8bc20b52ce066dd4ea3d9eaac40c04ea8a77f47c33789676580cf4c7c9ea3c3d k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner:v1.0],SizeBytes:111199402,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.21.0-alpha.0],SizeBytes:95511852,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.21.0-alpha.0],SizeBytes:88147262,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:85425365,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.21.0-alpha.0],SizeBytes:66088748,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:49230179,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:a3f7549ea04c419276e8b84e90a515bbce5bc8a057be2ed974ec45492eca346e k8s.gcr.io/e2e-test-images/agnhost:2.26],SizeBytes:49216572,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:4f3eec1e602e1e0b0e5858c8f3d399d0ef536799ec1f7666e025f211001cb706 k8s.gcr.io/e2e-test-images/agnhost:2.28],SizeBytes:49210832,},ContainerImage{Names:[docker.io/bitnami/kubectl@sha256:471a2f06834db208d0e4bef5856550f911357c41b72d0baefa591b3714839067 docker.io/bitnami/kubectl:latest],SizeBytes:48898314,},ContainerImage{Names:[k8s.gcr.io/coredns:1.7.0],SizeBytes:45355487,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:41902332,},ContainerImage{Names:[docker.io/library/httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a docker.io/library/httpd:2.4.39-alpine],SizeBytes:41901429,},ContainerImage{Names:[docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 docker.io/library/httpd:2.4.38-alpine],SizeBytes:40765017,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 
k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:40765006,},ContainerImage{Names:[docker.io/ollivier/clearwater-live-test@sha256:1ec1c46f531b5466e62e2a3f1e65e97336e8ea0ca50f11a4305775d6aeac9a58 docker.io/ollivier/clearwater-live-test:hunter],SizeBytes:39060692,},ContainerImage{Names:[k8s.gcr.io/build-image/debian-iptables@sha256:abe8cef9e116f2d5ec1175c386e33841ff3386779138b425af876384b0fd7ccb k8s.gcr.io/build-image/debian-iptables:buster-v1.5.0],SizeBytes:38088315,},ContainerImage{Names:[k8s.gcr.io/build-image/debian-iptables@sha256:4c9410a4ee555dcb0e8b7bd6fc77c65ac400f7c5bd4555df8187630efaea6ea4 k8s.gcr.io/build-image/debian-iptables:buster-v1.3.0],SizeBytes:37934917,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:ff02aacd9766d597883fabafc7ad604c719a57611db1bcc1564c69a45b000a55 gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.17],SizeBytes:25311280,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:24757245,},ContainerImage{Names:[docker.io/pingcap/chaos-daemon@sha256:2138de599ca94104101fcd8f3bd04401f8c88f2bc721d0f77d55a81b85f4d92f docker.io/pingcap/chaos-daemon:v0.8.0],SizeBytes:21885344,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:20c828075d1e36f679d6a91e905b0927141eef5e15be0c9a1ca4a6a0ed9313d2 k8s.gcr.io/sig-storage/csi-provisioner:v2.1.0],SizeBytes:21205045,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:78e3393f5fd5ff6c1e5dada2478cfa456fb7164929e573cf9a87bf6532730679 k8s.gcr.io/sig-storage/csi-provisioner:v1.6.0],SizeBytes:19408504,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:2ffa647e8107cfd39e5f464e738dce014c9f5e51b108da36c3ab621048d0bbab k8s.gcr.io/sig-storage/csi-attacher:v2.2.0],SizeBytes:18451536,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:6c6a0332693a7c456378f6abd2bb40611826c1e1a733cadbdae2daab3125b71c k8s.gcr.io/sig-storage/csi-resizer:v0.5.0],SizeBytes:18412631,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:4051e85640c22f8e00c34dbd273576fc9e1e2829992656588062be9c0f69b04b k8s.gcr.io/e2e-test-images/nonroot:1.1],SizeBytes:17748448,},ContainerImage{Names:[docker.io/pingcap/chaos-mesh@sha256:b90c04665e0275602e52df37d4b6fdd4621df7d4e4c823fc63356559f67dea72 docker.io/pingcap/chaos-mesh:v0.8.0],SizeBytes:13863887,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e docker.io/coredns/coredns:latest docker.io/coredns/coredns:1.8.0],SizeBytes:12945155,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:4a6e0769130686518325b21b0c1d0688b54e7c79244d48e1b15634e98e40c6ef docker.io/coredns/coredns:1.7.1],SizeBytes:12935701,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 
docker.io/aquasec/kube-bench:0.3.1],SizeBytes:8042926,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:9622c6a6dac7499a055a382930f4de82905a3c5735c0753f7094115c9c871309 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v1.3.0],SizeBytes:7717137,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 22 01:26:06.125: INFO: Logging kubelet events for node latest-worker Mar 22 01:26:06.127: INFO: Logging pods the kubelet thinks is on node latest-worker Mar 22 01:26:06.134: INFO: kube-proxy-5wvjm started at 2021-02-19 10:12:05 +0000 UTC (0+1 container statuses recorded) Mar 22 01:26:06.134: INFO: Container kube-proxy ready: true, restart count 0 Mar 22 01:26:06.134: INFO: kindnet-l4mzm started at 2021-03-22 00:02:51 +0000 UTC (0+1 container statuses recorded) Mar 22 01:26:06.134: INFO: Container kindnet-cni ready: true, restart count 0 Mar 22 01:26:06.134: INFO: chaos-daemon-vb9xf started at 2021-03-22 00:02:51 +0000 UTC (0+1 container statuses recorded) Mar 22 01:26:06.134: INFO: Container chaos-daemon ready: true, restart count 0 Mar 22 01:26:06.134: INFO: chaos-controller-manager-69c479c674-rdmrr started at 2021-03-22 00:02:52 +0000 UTC (0+1 container statuses recorded) Mar 22 01:26:06.134: INFO: Container chaos-mesh ready: true, restart count 0 W0322 01:26:06.139585 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Mar 22 01:26:06.355: INFO: Latency metrics for node latest-worker Mar 22 01:26:06.355: INFO: Logging node info for node latest-worker2 Mar 22 01:26:06.360: INFO: Node Info: &Node{ObjectMeta:{latest-worker2 7d2a1377-0c6f-45fb-899e-6c307ecb1803 7010894 0 2021-02-19 10:12:05 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux io.kubernetes.storage.mock/node:some-mock-node kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-worker2 kubernetes.io/os:linux] 
map[csi.volume.kubernetes.io/nodeid:{"csi-mock-csi-mock-volumes-1005":"csi-mock-csi-mock-volumes-1005","csi-mock-csi-mock-volumes-1125":"csi-mock-csi-mock-volumes-1125","csi-mock-csi-mock-volumes-1158":"csi-mock-csi-mock-volumes-1158","csi-mock-csi-mock-volumes-1167":"csi-mock-csi-mock-volumes-1167","csi-mock-csi-mock-volumes-117":"csi-mock-csi-mock-volumes-117","csi-mock-csi-mock-volumes-123":"csi-mock-csi-mock-volumes-123","csi-mock-csi-mock-volumes-1248":"csi-mock-csi-mock-volumes-1248","csi-mock-csi-mock-volumes-132":"csi-mock-csi-mock-volumes-132","csi-mock-csi-mock-volumes-1462":"csi-mock-csi-mock-volumes-1462","csi-mock-csi-mock-volumes-1508":"csi-mock-csi-mock-volumes-1508","csi-mock-csi-mock-volumes-1620":"csi-mock-csi-mock-volumes-1620","csi-mock-csi-mock-volumes-1694":"csi-mock-csi-mock-volumes-1694","csi-mock-csi-mock-volumes-1823":"csi-mock-csi-mock-volumes-1823","csi-mock-csi-mock-volumes-1829":"csi-mock-csi-mock-volumes-1829","csi-mock-csi-mock-volumes-1852":"csi-mock-csi-mock-volumes-1852","csi-mock-csi-mock-volumes-1856":"csi-mock-csi-mock-volumes-1856","csi-mock-csi-mock-volumes-194":"csi-mock-csi-mock-volumes-194","csi-mock-csi-mock-volumes-1943":"csi-mock-csi-mock-volumes-1943","csi-mock-csi-mock-volumes-2103":"csi-mock-csi-mock-volumes-2103","csi-mock-csi-mock-volumes-2109":"csi-mock-csi-mock-volumes-2109","csi-mock-csi-mock-volumes-2134":"csi-mock-csi-mock-volumes-2134","csi-mock-csi-mock-volumes-2212":"csi-mock-csi-mock-volumes-2212","csi-mock-csi-mock-volumes-2302":"csi-mock-csi-mock-volumes-2302","csi-mock-csi-mock-volumes-2308":"csi-mock-csi-mock-volumes-2308","csi-mock-csi-mock-volumes-2407":"csi-mock-csi-mock-volumes-2407","csi-mock-csi-mock-volumes-2458":"csi-mock-csi-mock-volumes-2458","csi-mock-csi-mock-volumes-2474":"csi-mock-csi-mock-volumes-2474","csi-mock-csi-mock-volumes-254":"csi-mock-csi-mock-volumes-254","csi-mock-csi-mock-volumes-2575":"csi-mock-csi-mock-volumes-2575","csi-mock-csi-mock-volumes-2622":"csi-mock-csi-mock-volumes-2622","csi-mock-csi-mock-volumes-2651":"csi-mock-csi-mock-volumes-2651","csi-mock-csi-mock-volumes-2668":"csi-mock-csi-mock-volumes-2668","csi-mock-csi-mock-volumes-2781":"csi-mock-csi-mock-volumes-2781","csi-mock-csi-mock-volumes-2791":"csi-mock-csi-mock-volumes-2791","csi-mock-csi-mock-volumes-2823":"csi-mock-csi-mock-volumes-2823","csi-mock-csi-mock-volumes-2847":"csi-mock-csi-mock-volumes-2847","csi-mock-csi-mock-volumes-295":"csi-mock-csi-mock-volumes-295","csi-mock-csi-mock-volumes-3129":"csi-mock-csi-mock-volumes-3129","csi-mock-csi-mock-volumes-3190":"csi-mock-csi-mock-volumes-3190","csi-mock-csi-mock-volumes-3428":"csi-mock-csi-mock-volumes-3428","csi-mock-csi-mock-volumes-3490":"csi-mock-csi-mock-volumes-3490","csi-mock-csi-mock-volumes-3491":"csi-mock-csi-mock-volumes-3491","csi-mock-csi-mock-volumes-3532":"csi-mock-csi-mock-volumes-3532","csi-mock-csi-mock-volumes-3609":"csi-mock-csi-mock-volumes-3609","csi-mock-csi-mock-volumes-368":"csi-mock-csi-mock-volumes-368","csi-mock-csi-mock-volumes-3698":"csi-mock-csi-mock-volumes-3698","csi-mock-csi-mock-volumes-3714":"csi-mock-csi-mock-volumes-3714","csi-mock-csi-mock-volumes-3720":"csi-mock-csi-mock-volumes-3720","csi-mock-csi-mock-volumes-3722":"csi-mock-csi-mock-volumes-3722","csi-mock-csi-mock-volumes-3723":"csi-mock-csi-mock-volumes-3723","csi-mock-csi-mock-volumes-3783":"csi-mock-csi-mock-volumes-3783","csi-mock-csi-mock-volumes-3887":"csi-mock-csi-mock-volumes-3887","csi-mock-csi-mock-volumes-3973":"csi-mock-csi-mock-volumes-3973","csi-mock-csi-mock-volumes-4074":"
csi-mock-csi-mock-volumes-4074","csi-mock-csi-mock-volumes-4164":"csi-mock-csi-mock-volumes-4164","csi-mock-csi-mock-volumes-4181":"csi-mock-csi-mock-volumes-4181","csi-mock-csi-mock-volumes-4442":"csi-mock-csi-mock-volumes-4442","csi-mock-csi-mock-volumes-4483":"csi-mock-csi-mock-volumes-4483","csi-mock-csi-mock-volumes-4549":"csi-mock-csi-mock-volumes-4549","csi-mock-csi-mock-volumes-4707":"csi-mock-csi-mock-volumes-4707","csi-mock-csi-mock-volumes-4906":"csi-mock-csi-mock-volumes-4906","csi-mock-csi-mock-volumes-4977":"csi-mock-csi-mock-volumes-4977","csi-mock-csi-mock-volumes-5116":"csi-mock-csi-mock-volumes-5116","csi-mock-csi-mock-volumes-5166":"csi-mock-csi-mock-volumes-5166","csi-mock-csi-mock-volumes-5233":"csi-mock-csi-mock-volumes-5233","csi-mock-csi-mock-volumes-5344":"csi-mock-csi-mock-volumes-5344","csi-mock-csi-mock-volumes-5406":"csi-mock-csi-mock-volumes-5406","csi-mock-csi-mock-volumes-5433":"csi-mock-csi-mock-volumes-5433","csi-mock-csi-mock-volumes-5483":"csi-mock-csi-mock-volumes-5483","csi-mock-csi-mock-volumes-5486":"csi-mock-csi-mock-volumes-5486","csi-mock-csi-mock-volumes-5520":"csi-mock-csi-mock-volumes-5520","csi-mock-csi-mock-volumes-5540":"csi-mock-csi-mock-volumes-5540","csi-mock-csi-mock-volumes-5592":"csi-mock-csi-mock-volumes-5592","csi-mock-csi-mock-volumes-5741":"csi-mock-csi-mock-volumes-5741","csi-mock-csi-mock-volumes-5753":"csi-mock-csi-mock-volumes-5753","csi-mock-csi-mock-volumes-5790":"csi-mock-csi-mock-volumes-5790","csi-mock-csi-mock-volumes-5820":"csi-mock-csi-mock-volumes-5820","csi-mock-csi-mock-volumes-5830":"csi-mock-csi-mock-volumes-5830","csi-mock-csi-mock-volumes-5880":"csi-mock-csi-mock-volumes-5880","csi-mock-csi-mock-volumes-5886":"csi-mock-csi-mock-volumes-5886","csi-mock-csi-mock-volumes-5899":"csi-mock-csi-mock-volumes-5899","csi-mock-csi-mock-volumes-5925":"csi-mock-csi-mock-volumes-5925","csi-mock-csi-mock-volumes-5928":"csi-mock-csi-mock-volumes-5928","csi-mock-csi-mock-volumes-6098":"csi-mock-csi-mock-volumes-6098","csi-mock-csi-mock-volumes-6154":"csi-mock-csi-mock-volumes-6154","csi-mock-csi-mock-volumes-6193":"csi-mock-csi-mock-volumes-6193","csi-mock-csi-mock-volumes-6237":"csi-mock-csi-mock-volumes-6237","csi-mock-csi-mock-volumes-6393":"csi-mock-csi-mock-volumes-6393","csi-mock-csi-mock-volumes-6394":"csi-mock-csi-mock-volumes-6394","csi-mock-csi-mock-volumes-6468":"csi-mock-csi-mock-volumes-6468","csi-mock-csi-mock-volumes-6508":"csi-mock-csi-mock-volumes-6508","csi-mock-csi-mock-volumes-6516":"csi-mock-csi-mock-volumes-6516","csi-mock-csi-mock-volumes-6520":"csi-mock-csi-mock-volumes-6520","csi-mock-csi-mock-volumes-6574":"csi-mock-csi-mock-volumes-6574","csi-mock-csi-mock-volumes-6663":"csi-mock-csi-mock-volumes-6663","csi-mock-csi-mock-volumes-6715":"csi-mock-csi-mock-volumes-6715","csi-mock-csi-mock-volumes-6754":"csi-mock-csi-mock-volumes-6754","csi-mock-csi-mock-volumes-6804":"csi-mock-csi-mock-volumes-6804","csi-mock-csi-mock-volumes-6918":"csi-mock-csi-mock-volumes-6918","csi-mock-csi-mock-volumes-6925":"csi-mock-csi-mock-volumes-6925","csi-mock-csi-mock-volumes-7092":"csi-mock-csi-mock-volumes-7092","csi-mock-csi-mock-volumes-7139":"csi-mock-csi-mock-volumes-7139","csi-mock-csi-mock-volumes-7270":"csi-mock-csi-mock-volumes-7270","csi-mock-csi-mock-volumes-7273":"csi-mock-csi-mock-volumes-7273","csi-mock-csi-mock-volumes-7442":"csi-mock-csi-mock-volumes-7442","csi-mock-csi-mock-volumes-7448":"csi-mock-csi-mock-volumes-7448","csi-mock-csi-mock-volumes-7543":"csi-mock-csi-mock-volumes-7543","csi-mock-csi-mock-volume
s-7597":"csi-mock-csi-mock-volumes-7597","csi-mock-csi-mock-volumes-7608":"csi-mock-csi-mock-volumes-7608","csi-mock-csi-mock-volumes-7642":"csi-mock-csi-mock-volumes-7642","csi-mock-csi-mock-volumes-7659":"csi-mock-csi-mock-volumes-7659","csi-mock-csi-mock-volumes-7725":"csi-mock-csi-mock-volumes-7725","csi-mock-csi-mock-volumes-7760":"csi-mock-csi-mock-volumes-7760","csi-mock-csi-mock-volumes-778":"csi-mock-csi-mock-volumes-778","csi-mock-csi-mock-volumes-7811":"csi-mock-csi-mock-volumes-7811","csi-mock-csi-mock-volumes-7819":"csi-mock-csi-mock-volumes-7819","csi-mock-csi-mock-volumes-791":"csi-mock-csi-mock-volumes-791","csi-mock-csi-mock-volumes-7929":"csi-mock-csi-mock-volumes-7929","csi-mock-csi-mock-volumes-7930":"csi-mock-csi-mock-volumes-7930","csi-mock-csi-mock-volumes-7933":"csi-mock-csi-mock-volumes-7933","csi-mock-csi-mock-volumes-7950":"csi-mock-csi-mock-volumes-7950","csi-mock-csi-mock-volumes-8005":"csi-mock-csi-mock-volumes-8005","csi-mock-csi-mock-volumes-8070":"csi-mock-csi-mock-volumes-8070","csi-mock-csi-mock-volumes-8123":"csi-mock-csi-mock-volumes-8123","csi-mock-csi-mock-volumes-8132":"csi-mock-csi-mock-volumes-8132","csi-mock-csi-mock-volumes-8134":"csi-mock-csi-mock-volumes-8134","csi-mock-csi-mock-volumes-8381":"csi-mock-csi-mock-volumes-8381","csi-mock-csi-mock-volumes-8391":"csi-mock-csi-mock-volumes-8391","csi-mock-csi-mock-volumes-8409":"csi-mock-csi-mock-volumes-8409","csi-mock-csi-mock-volumes-8420":"csi-mock-csi-mock-volumes-8420","csi-mock-csi-mock-volumes-8467":"csi-mock-csi-mock-volumes-8467","csi-mock-csi-mock-volumes-8581":"csi-mock-csi-mock-volumes-8581","csi-mock-csi-mock-volumes-8619":"csi-mock-csi-mock-volumes-8619","csi-mock-csi-mock-volumes-8708":"csi-mock-csi-mock-volumes-8708","csi-mock-csi-mock-volumes-8766":"csi-mock-csi-mock-volumes-8766","csi-mock-csi-mock-volumes-8789":"csi-mock-csi-mock-volumes-8789","csi-mock-csi-mock-volumes-8800":"csi-mock-csi-mock-volumes-8800","csi-mock-csi-mock-volumes-8819":"csi-mock-csi-mock-volumes-8819","csi-mock-csi-mock-volumes-8830":"csi-mock-csi-mock-volumes-8830","csi-mock-csi-mock-volumes-9034":"csi-mock-csi-mock-volumes-9034","csi-mock-csi-mock-volumes-9051":"csi-mock-csi-mock-volumes-9051","csi-mock-csi-mock-volumes-9052":"csi-mock-csi-mock-volumes-9052","csi-mock-csi-mock-volumes-9211":"csi-mock-csi-mock-volumes-9211","csi-mock-csi-mock-volumes-9234":"csi-mock-csi-mock-volumes-9234","csi-mock-csi-mock-volumes-9238":"csi-mock-csi-mock-volumes-9238","csi-mock-csi-mock-volumes-9316":"csi-mock-csi-mock-volumes-9316","csi-mock-csi-mock-volumes-9344":"csi-mock-csi-mock-volumes-9344","csi-mock-csi-mock-volumes-9386":"csi-mock-csi-mock-volumes-9386","csi-mock-csi-mock-volumes-941":"csi-mock-csi-mock-volumes-941","csi-mock-csi-mock-volumes-9416":"csi-mock-csi-mock-volumes-9416","csi-mock-csi-mock-volumes-9426":"csi-mock-csi-mock-volumes-9426","csi-mock-csi-mock-volumes-9429":"csi-mock-csi-mock-volumes-9429","csi-mock-csi-mock-volumes-9438":"csi-mock-csi-mock-volumes-9438","csi-mock-csi-mock-volumes-9439":"csi-mock-csi-mock-volumes-9439","csi-mock-csi-mock-volumes-9665":"csi-mock-csi-mock-volumes-9665","csi-mock-csi-mock-volumes-9772":"csi-mock-csi-mock-volumes-9772","csi-mock-csi-mock-volumes-9793":"csi-mock-csi-mock-volumes-9793","csi-mock-csi-mock-volumes-9921":"csi-mock-csi-mock-volumes-9921","csi-mock-csi-mock-volumes-9953":"csi-mock-csi-mock-volumes-9953"} kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 
volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-02-19 10:12:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {kube-controller-manager Update v1 2021-03-22 00:34:44 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}} {e2e.test Update v1 2021-03-22 00:44:05 +0000 UTC FieldsV1 {"f:status":{"f:capacity":{"f:example.com/fakecpu":{}}}}} {kubelet Update v1 2021-03-22 00:44:09 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:csi.volume.kubernetes.io/nodeid":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:io.kubernetes.storage.mock/node":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:example.com/fakecpu":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:kind://docker/latest/latest-worker2,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},example.com/fakecpu: {{1 3} {} 1k DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},example.com/fakecpu: {{1 3} {} 1k DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-03-22 01:24:16 +0000 UTC,LastTransitionTime:2021-02-19 10:12:05 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-03-22 01:24:16 +0000 
UTC,LastTransitionTime:2021-02-19 10:12:05 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-03-22 01:24:16 +0000 UTC,LastTransitionTime:2021-02-19 10:12:05 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-03-22 01:24:16 +0000 UTC,LastTransitionTime:2021-02-19 10:12:35 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.13,},NodeAddress{Type:Hostname,Address:latest-worker2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:c99819c175ab4bf18f91789643e4cec7,SystemUUID:67d3f1bb-f61c-4599-98ac-5879bee4ddde,BootID:b267d78b-f69b-4338-80e8-3f4944338e5d,KernelVersion:4.15.0-118-generic,OSImage:Ubuntu 20.04 LTS,ContainerRuntimeVersion:containerd://1.3.3-14-g449e9269,KubeletVersion:v1.21.0-alpha.0,KubeProxyVersion:v1.21.0-alpha.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/ollivier/clearwater-cassandra@sha256:7460a5e29c91e16bd36534b5284b27832e1c6e0f7c22a4bb79eda79942d250e1 docker.io/ollivier/clearwater-cassandra:hunter],SizeBytes:386500834,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead-prov@sha256:ef3803b4336ea217f9a467dac7f5c4bc3d2cd0f3cbbcc6c101419506fc3a6fa4 docker.io/ollivier/clearwater-homestead-prov:hunter],SizeBytes:360721934,},ContainerImage{Names:[docker.io/ollivier/clearwater-ellis@sha256:abcaf4a9ec74aa746f3b3949f05ee359349fce2b0b3b2deedf67164719cff5dc docker.io/ollivier/clearwater-ellis:hunter],SizeBytes:351519591,},ContainerImage{Names:[docker.io/ollivier/clearwater-homer@sha256:f8738d756137990120f6aafca7e0bc57186ab591a4b51508ba9ddf30688f4de1 docker.io/ollivier/clearwater-homer:hunter],SizeBytes:344304298,},ContainerImage{Names:[docker.io/ollivier/clearwater-astaire@sha256:5bc730435563dc3e31f3a3b7bb58b5899c453929b624052622b3a9a1c2fd04d8 docker.io/ollivier/clearwater-astaire:hunter],SizeBytes:327310970,},ContainerImage{Names:[docker.io/ollivier/clearwater-bono@sha256:0da41ddbf72c158f8e617015450587634cdf6856696005601b2108df92a27254 docker.io/ollivier/clearwater-bono:hunter],SizeBytes:303708624,},ContainerImage{Names:[docker.io/ollivier/clearwater-sprout@sha256:fd91f90bdfbfac30288ca296962e36537f8fa311d34711904e027098227d9f49 docker.io/ollivier/clearwater-sprout:hunter],SizeBytes:298627136,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead@sha256:8467d6c9caefa6a402eba927ee31f8179b2dc3386ac4768dd04f4bd630a1b9e9 docker.io/ollivier/clearwater-homestead:hunter],SizeBytes:295167572,},ContainerImage{Names:[docker.io/ollivier/clearwater-ralf@sha256:7054a741dbd4ea45fa4efd8138c07e21d35fbed627971559de5df2d03883e94f docker.io/ollivier/clearwater-ralf:hunter],SizeBytes:287441316,},ContainerImage{Names:[docker.io/ollivier/clearwater-chronos@sha256:1d5f9b5385db632e3076463fec5a4db5d4a15ef21ac5a112cb42fb0551ffa36d docker.io/ollivier/clearwater-chronos:hunter],SizeBytes:285504787,},ContainerImage{Names:[k8s.gcr.io/etcd:3.4.13-0],SizeBytes:254659261,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.21.0-alpha.0],SizeBytes:136866158,},ContainerImage{Names:[docker.io/aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 
docker.io/aquasec/kube-hunter:0.3.1],SizeBytes:124684106,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:112029652,},ContainerImage{Names:[docker.io/gluster/glusterdynamic-provisioner@sha256:90067cb05a7d217651e84576935934fd9dcff8e6bcdcbaa416bbf36fcd09dbd1 docker.io/gluster/glusterdynamic-provisioner:v1.0],SizeBytes:111200078,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner@sha256:8bc20b52ce066dd4ea3d9eaac40c04ea8a77f47c33789676580cf4c7c9ea3c3d k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner:v1.0],SizeBytes:111199402,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.21.0-alpha.0],SizeBytes:95511852,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.21.0-alpha.0],SizeBytes:88147262,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:85425365,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.21.0-alpha.0],SizeBytes:66088748,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:49230179,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:a3f7549ea04c419276e8b84e90a515bbce5bc8a057be2ed974ec45492eca346e k8s.gcr.io/e2e-test-images/agnhost:2.26],SizeBytes:49216572,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:4f3eec1e602e1e0b0e5858c8f3d399d0ef536799ec1f7666e025f211001cb706 k8s.gcr.io/e2e-test-images/agnhost:2.28],SizeBytes:49210832,},ContainerImage{Names:[k8s.gcr.io/coredns:1.7.0],SizeBytes:45355487,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:41902332,},ContainerImage{Names:[docker.io/library/httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a docker.io/library/httpd:2.4.39-alpine],SizeBytes:41901429,},ContainerImage{Names:[docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 docker.io/library/httpd:2.4.38-alpine],SizeBytes:40765017,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:40765006,},ContainerImage{Names:[docker.io/ollivier/clearwater-live-test@sha256:1ec1c46f531b5466e62e2a3f1e65e97336e8ea0ca50f11a4305775d6aeac9a58 docker.io/ollivier/clearwater-live-test:hunter],SizeBytes:39060692,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:ff02aacd9766d597883fabafc7ad604c719a57611db1bcc1564c69a45b000a55 gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.17],SizeBytes:25311280,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 
k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:24757245,},ContainerImage{Names:[docker.io/pingcap/chaos-daemon@sha256:2138de599ca94104101fcd8f3bd04401f8c88f2bc721d0f77d55a81b85f4d92f docker.io/pingcap/chaos-daemon:v0.8.0],SizeBytes:21885344,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:20c828075d1e36f679d6a91e905b0927141eef5e15be0c9a1ca4a6a0ed9313d2 k8s.gcr.io/sig-storage/csi-provisioner:v2.1.0],SizeBytes:21205045,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:78e3393f5fd5ff6c1e5dada2478cfa456fb7164929e573cf9a87bf6532730679 k8s.gcr.io/sig-storage/csi-provisioner:v1.6.0],SizeBytes:19408504,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/regression-issue-74839@sha256:b4f1d8d61bdad84bd50442d161d5460e4019d53e989b64220fdbc62fc87d76bf k8s.gcr.io/e2e-test-images/regression-issue-74839:1.2],SizeBytes:18651485,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:2ffa647e8107cfd39e5f464e738dce014c9f5e51b108da36c3ab621048d0bbab k8s.gcr.io/sig-storage/csi-attacher:v2.2.0],SizeBytes:18451536,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:6c6a0332693a7c456378f6abd2bb40611826c1e1a733cadbdae2daab3125b71c k8s.gcr.io/sig-storage/csi-resizer:v0.5.0],SizeBytes:18412631,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:4051e85640c22f8e00c34dbd273576fc9e1e2829992656588062be9c0f69b04b k8s.gcr.io/e2e-test-images/nonroot:1.1],SizeBytes:17748448,},ContainerImage{Names:[docker.io/pingcap/chaos-mesh@sha256:b90c04665e0275602e52df37d4b6fdd4621df7d4e4c823fc63356559f67dea72 docker.io/pingcap/chaos-mesh:v0.8.0],SizeBytes:13863887,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e docker.io/coredns/coredns:1.8.0],SizeBytes:12945155,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:4a6e0769130686518325b21b0c1d0688b54e7c79244d48e1b15634e98e40c6ef docker.io/coredns/coredns:1.7.1],SizeBytes:12935701,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:642ff9910da6ea9a8624b0234eef52af9ca75ecbec474c5507cb096bdfbae4e5 docker.io/coredns/coredns:latest],SizeBytes:12893350,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 docker.io/aquasec/kube-bench:0.3.1],SizeBytes:8042926,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:9622c6a6dac7499a055a382930f4de82905a3c5735c0753f7094115c9c871309 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v1.3.0],SizeBytes:7717137,},ContainerImage{Names:[quay.io/coreos/etcd:v2.2.5],SizeBytes:7670543,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 22 01:26:06.361: INFO: Logging kubelet events for node latest-worker2 Mar 22 01:26:06.365: INFO: Logging pods the kubelet thinks is on node latest-worker2 Mar 22 01:26:06.368: INFO: kube-proxy-7q92q started at 2021-02-19 10:12:05 +0000 UTC (0+1 container statuses recorded) Mar 22 01:26:06.368: INFO: Container kube-proxy ready: true, restart count 0 Mar 22 01:26:06.368: INFO: kindnet-7qb7q started at 2021-03-22 00:02:52 +0000 UTC (0+1 
container statuses recorded) Mar 22 01:26:06.368: INFO: Container kindnet-cni ready: true, restart count 0 Mar 22 01:26:06.368: INFO: chaos-daemon-4zjcg started at 2021-03-22 00:02:52 +0000 UTC (0+1 container statuses recorded) Mar 22 01:26:06.368: INFO: Container chaos-daemon ready: true, restart count 0 W0322 01:26:06.372215 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Mar 22 01:26:06.599: INFO: Latency metrics for node latest-worker2 Mar 22 01:26:06.599: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "cronjob-5175" for this suite. • Failure [0.716 seconds] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should support CronJob API operations [Conformance] [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Mar 22 01:26:06.014: Unexpected error: <*errors.StatusError | 0xc004481400>: { ErrStatus: { TypeMeta: {Kind: "", APIVersion: ""}, ListMeta: { SelfLink: "", ResourceVersion: "", Continue: "", RemainingItemCount: nil, }, Status: "Failure", Message: "the server could not find the requested resource", Reason: "NotFound", Details: {Name: "", Group: "", Kind: "", UID: "", Causes: nil, RetryAfterSeconds: 0}, Code: 404, }, } the server could not find the requested resource occurred /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:327 ------------------------------ {"msg":"FAILED [sig-apps] CronJob should support CronJob API operations [Conformance]","total":330,"completed":206,"skipped":3657,"failed":16,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-apps] CronJob should support CronJob API operations [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS 
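------------------------------
The NotFound above is consistent with a version skew between the e2e suite and the apiserver under test: these specs exercise the CronJob API at batch/v1, the group/version CronJob was promoted to in Kubernetes 1.21, and a server that still serves CronJob only at batch/v1beta1 answers every batch/v1 call with "the server could not find the requested resource". A minimal client-go sketch of the round-trip the spec attempts, assuming a placeholder name, namespace, and image rather than the e2e fixtures:

package main

import (
	"context"
	"fmt"

	batchv1 "k8s.io/api/batch/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ctx := context.TODO()

	cj := &batchv1.CronJob{
		ObjectMeta: metav1.ObjectMeta{Name: "demo-cronjob"}, // placeholder name
		Spec: batchv1.CronJobSpec{
			Schedule: "*/1 * * * *",
			JobTemplate: batchv1.JobTemplateSpec{
				Spec: batchv1.JobSpec{
					Template: corev1.PodTemplateSpec{
						Spec: corev1.PodSpec{
							RestartPolicy: corev1.RestartPolicyOnFailure,
							Containers: []corev1.Container{{
								Name:    "c",
								Image:   "k8s.gcr.io/e2e-test-images/agnhost:2.28",
								Command: []string{"sleep", "300"},
							}},
						},
					},
				},
			},
		},
	}

	// Each call below targets batch/v1; a server that only serves
	// batch/v1beta1 CronJobs returns the same 404 logged above.
	if _, err := cs.BatchV1().CronJobs("default").Create(ctx, cj, metav1.CreateOptions{}); err != nil {
		fmt.Println("create:", err)
	}
	if _, err := cs.BatchV1().CronJobs("default").List(ctx, metav1.ListOptions{}); err != nil {
		fmt.Println("list:", err)
	}
	if err := cs.BatchV1().CronJobs("default").Delete(ctx, "demo-cronjob", metav1.DeleteOptions{}); err != nil {
		fmt.Println("delete:", err)
	}
}

Against a server that serves batch/v1 CronJobs, the same three calls succeed, which is why this spec and the other CronJob entries in the failed list point at the server build rather than the test logic.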
------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 01:26:06.608: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 22 01:26:07.526: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 22 01:26:09.537: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751973167, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751973167, loc:(*time.Location)(0x99208a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751973167, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751973167, loc:(*time.Location)(0x99208a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-8977db\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 22 01:26:11.543: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751973167, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751973167, loc:(*time.Location)(0x99208a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751973167, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751973167, loc:(*time.Location)(0x99208a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-8977db\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 22 01:26:14.597: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing validating webhooks should work [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that does not comply 
to the validation webhook rules STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 01:26:15.210: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-5728" for this suite. STEP: Destroying namespace "webhook-5728-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 • [SLOW TEST:8.796 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing validating webhooks should work [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":330,"completed":207,"skipped":3758,"failed":16,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-apps] CronJob should support CronJob API operations [Conformance]"]} SSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a 
kubernetes client Mar 22 01:26:15.404: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: create the rc1 STEP: create the rc2 STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well STEP: delete the rc simpletest-rc-to-be-deleted STEP: wait for the rc to be deleted STEP: Gathering metrics W0322 01:26:29.133219 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Mar 22 01:27:31.153: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering. Mar 22 01:27:31.153: INFO: Deleting pod "simpletest-rc-to-be-deleted-57xj2" in namespace "gc-3593" Mar 22 01:27:31.195: INFO: Deleting pod "simpletest-rc-to-be-deleted-8lbvk" in namespace "gc-3593" Mar 22 01:27:31.270: INFO: Deleting pod "simpletest-rc-to-be-deleted-csjcv" in namespace "gc-3593" Mar 22 01:27:31.323: INFO: Deleting pod "simpletest-rc-to-be-deleted-d7drk" in namespace "gc-3593" Mar 22 01:27:31.689: INFO: Deleting pod "simpletest-rc-to-be-deleted-dlgbr" in namespace "gc-3593" [AfterEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 01:27:31.925: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-3593" for this suite. • [SLOW TEST:76.627 seconds] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":330,"completed":208,"skipped":3764,"failed":16,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","[sig-network] Services should be able to switch session affinity for service with type 
clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-apps] CronJob should support CronJob API operations [Conformance]"]} SSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 01:27:32.031: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 22 01:27:33.589: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 22 01:27:35.656: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751973253, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751973253, loc:(*time.Location)(0x99208a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751973253, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751973253, loc:(*time.Location)(0x99208a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-8977db\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 22 01:27:37.661: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751973253, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751973253, loc:(*time.Location)(0x99208a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751973253, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751973253, loc:(*time.Location)(0x99208a0)}}, 
Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-8977db\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 22 01:27:40.694: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should honor timeout [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Setting timeout (1s) shorter than webhook latency (5s) STEP: Registering slow webhook via the AdmissionRegistration API STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s) STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is longer than webhook latency STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is empty (defaulted to 10s in v1) STEP: Registering slow webhook via the AdmissionRegistration API [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 01:27:52.951: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-1940" for this suite. STEP: Destroying namespace "webhook-1940-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 • [SLOW TEST:21.100 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should honor timeout [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":330,"completed":209,"skipped":3767,"failed":16,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-network] Services should have session affinity 
work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-apps] CronJob should support CronJob API operations [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] Variable Expansion
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 22 01:27:53.131: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's command [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test substitution in container's command
Mar 22 01:27:53.214: INFO: Waiting up to 5m0s for pod "var-expansion-8891a05e-b0b1-43f7-9eaf-dae4af812934" in namespace "var-expansion-6410" to be "Succeeded or Failed"
Mar 22 01:27:53.237: INFO: Pod "var-expansion-8891a05e-b0b1-43f7-9eaf-dae4af812934": Phase="Pending", Reason="", readiness=false. Elapsed: 22.67256ms
Mar 22 01:27:55.242: INFO: Pod "var-expansion-8891a05e-b0b1-43f7-9eaf-dae4af812934": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027936515s
Mar 22 01:27:57.248: INFO: Pod "var-expansion-8891a05e-b0b1-43f7-9eaf-dae4af812934": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.033663544s
STEP: Saw pod success
Mar 22 01:27:57.248: INFO: Pod "var-expansion-8891a05e-b0b1-43f7-9eaf-dae4af812934" satisfied condition "Succeeded or Failed"
Mar 22 01:27:57.251: INFO: Trying to get logs from node latest-worker pod var-expansion-8891a05e-b0b1-43f7-9eaf-dae4af812934 container dapi-container:
STEP: delete the pod
Mar 22 01:27:57.416: INFO: Waiting for pod var-expansion-8891a05e-b0b1-43f7-9eaf-dae4af812934 to disappear
Mar 22 01:27:57.512: INFO: Pod var-expansion-8891a05e-b0b1-43f7-9eaf-dae4af812934 no longer exists
[AfterEach] [sig-node] Variable Expansion
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 22 01:27:57.512: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-6410" for this suite.
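For reference, the substitution this spec verifies is performed by the kubelet: $(VAR) references in a container's command and args are expanded from the container's declared environment before the process is started. A minimal sketch of such a pod, with placeholder name, image, and message (the fixture the test builds differs):

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// varExpansionPod builds a pod whose command echoes an env var via $(MESSAGE);
// the kubelet expands the reference before exec, the command exits 0, and the
// pod reaches phase Succeeded, the condition polled for above.
func varExpansionPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "var-expansion-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "dapi-container",
				Image:   "docker.io/library/busybox:1.29", // placeholder image
				Env:     []corev1.EnvVar{{Name: "MESSAGE", Value: "test message"}},
				Command: []string{"sh", "-c", "echo $(MESSAGE)"},
			}},
		},
	}
}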
•{"msg":"PASSED [sig-node] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":330,"completed":210,"skipped":3788,"failed":16,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-apps] CronJob should support CronJob API operations [Conformance]"]} SSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Secrets should patch a secret [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-node] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 01:27:57.613: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should patch a secret [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a secret STEP: listing secrets in all namespaces to ensure that there are more than zero STEP: patching the secret STEP: deleting the secret using a LabelSelector STEP: listing secrets in all namespaces, searching for label name and value in patch [AfterEach] [sig-node] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 01:27:57.812: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-4661" for this suite. 
•{"msg":"PASSED [sig-node] Secrets should patch a secret [Conformance]","total":330,"completed":211,"skipped":3805,"failed":16,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-apps] CronJob should support CronJob API operations [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 01:27:57.820: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Mar 22 01:27:57.883: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Mar 22 01:27:57.892: INFO: Waiting for terminating namespaces to be deleted... 
Mar 22 01:27:57.896: INFO: Logging pods the apiserver thinks are on node latest-worker before test
Mar 22 01:27:57.901: INFO: chaos-controller-manager-69c479c674-rdmrr from default started at 2021-03-22 00:02:52 +0000 UTC (1 container statuses recorded)
Mar 22 01:27:57.901: INFO: Container chaos-mesh ready: true, restart count 0
Mar 22 01:27:57.901: INFO: chaos-daemon-vb9xf from default started at 2021-03-22 00:02:51 +0000 UTC (1 container statuses recorded)
Mar 22 01:27:57.901: INFO: Container chaos-daemon ready: true, restart count 0
Mar 22 01:27:57.901: INFO: kindnet-l4mzm from kube-system started at 2021-03-22 00:02:51 +0000 UTC (1 container statuses recorded)
Mar 22 01:27:57.901: INFO: Container kindnet-cni ready: true, restart count 0
Mar 22 01:27:57.901: INFO: kube-proxy-5wvjm from kube-system started at 2021-02-19 10:12:05 +0000 UTC (1 container statuses recorded)
Mar 22 01:27:57.901: INFO: Container kube-proxy ready: true, restart count 0
Mar 22 01:27:57.901: INFO: Logging pods the apiserver thinks are on node latest-worker2 before test
Mar 22 01:27:57.906: INFO: chaos-daemon-4zjcg from default started at 2021-03-22 00:02:52 +0000 UTC (1 container statuses recorded)
Mar 22 01:27:57.906: INFO: Container chaos-daemon ready: true, restart count 0
Mar 22 01:27:57.906: INFO: kindnet-7qb7q from kube-system started at 2021-03-22 00:02:52 +0000 UTC (1 container statuses recorded)
Mar 22 01:27:57.906: INFO: Container kindnet-cni ready: true, restart count 0
Mar 22 01:27:57.906: INFO: kube-proxy-7q92q from kube-system started at 2021-02-19 10:12:05 +0000 UTC (1 container statuses recorded)
Mar 22 01:27:57.906: INFO: Container kube-proxy ready: true, restart count 0
[It] validates that NodeSelector is respected if matching [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-e80c6096-726f-443f-aacc-449d4d566176 42
STEP: Trying to relaunch the pod, now with labels.
STEP: removing the label kubernetes.io/e2e-e80c6096-726f-443f-aacc-449d4d566176 off the node latest-worker
STEP: verifying the node doesn't have the label kubernetes.io/e2e-e80c6096-726f-443f-aacc-449d4d566176
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 22 01:28:06.101: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-1534" for this suite.
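The label key and value in the steps above come straight from the test: after latest-worker is labeled kubernetes.io/e2e-e80c6096-726f-443f-aacc-449d4d566176=42, the relaunched pod carries the same pair as a nodeSelector and can therefore only bind to that node. A minimal sketch of such a pod (name and image are placeholders):

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// pinnedPod returns a pod the scheduler may only place on a node carrying
// the exact label the test applied to latest-worker.
func pinnedPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "with-labels"},
		Spec: corev1.PodSpec{
			NodeSelector: map[string]string{
				"kubernetes.io/e2e-e80c6096-726f-443f-aacc-449d4d566176": "42",
			},
			Containers: []corev1.Container{{
				Name:  "with-labels",
				Image: "k8s.gcr.io/pause:3.2",
			}},
		},
	}
}

Once the teardown steps above remove the label, an identical pod would stay Pending with a FailedScheduling event instead of binding.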
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:8.290 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that NodeSelector is respected if matching [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance]","total":330,"completed":212,"skipped":3865,"failed":16,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-apps] CronJob should support CronJob API operations [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 01:28:06.111: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] updates the published spec when one version gets renamed [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: set up a multi version CRD Mar 22 01:28:06.207: INFO: >>> kubeConfig: /root/.kube/config STEP: rename a version STEP: check the 
new version name is served STEP: check the old version name is removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 01:28:26.016: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-4195" for this suite. • [SLOW TEST:19.918 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 updates the published spec when one version gets renamed [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":330,"completed":213,"skipped":3895,"failed":16,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-apps] CronJob should support CronJob API operations [Conformance]"]} SSSSS ------------------------------ [sig-node] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-node] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 01:28:26.029: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default 
arguments (docker cmd) [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test override arguments Mar 22 01:28:26.362: INFO: Waiting up to 5m0s for pod "client-containers-daf1f29d-4160-429c-abdb-422a71713d79" in namespace "containers-7733" to be "Succeeded or Failed" Mar 22 01:28:26.426: INFO: Pod "client-containers-daf1f29d-4160-429c-abdb-422a71713d79": Phase="Pending", Reason="", readiness=false. Elapsed: 64.069866ms Mar 22 01:28:28.430: INFO: Pod "client-containers-daf1f29d-4160-429c-abdb-422a71713d79": Phase="Pending", Reason="", readiness=false. Elapsed: 2.067907799s Mar 22 01:28:30.524: INFO: Pod "client-containers-daf1f29d-4160-429c-abdb-422a71713d79": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.161320119s STEP: Saw pod success Mar 22 01:28:30.524: INFO: Pod "client-containers-daf1f29d-4160-429c-abdb-422a71713d79" satisfied condition "Succeeded or Failed" Mar 22 01:28:30.527: INFO: Trying to get logs from node latest-worker2 pod client-containers-daf1f29d-4160-429c-abdb-422a71713d79 container agnhost-container: STEP: delete the pod Mar 22 01:28:30.749: INFO: Waiting for pod client-containers-daf1f29d-4160-429c-abdb-422a71713d79 to disappear Mar 22 01:28:30.788: INFO: Pod client-containers-daf1f29d-4160-429c-abdb-422a71713d79 no longer exists [AfterEach] [sig-node] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 01:28:30.788: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-7733" for this suite. •{"msg":"PASSED [sig-node] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":330,"completed":214,"skipped":3900,"failed":16,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-apps] 
CronJob should support CronJob API operations [Conformance]"]} SSSSSSSSSSSS ------------------------------ [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-node] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 01:28:30.798: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:52 STEP: create the container to handle the HTTPGet hook request. Mar 22 01:28:30.995: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Mar 22 01:28:33.000: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Mar 22 01:28:35.000: INFO: The status of Pod pod-handle-http-request is Running (Ready = true) [It] should execute poststart exec hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: create the pod with lifecycle hook Mar 22 01:28:35.054: INFO: The status of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true) Mar 22 01:28:37.059: INFO: The status of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true) Mar 22 01:28:39.059: INFO: The status of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true) Mar 22 01:28:41.059: INFO: The status of Pod pod-with-poststart-exec-hook is Running (Ready = true) STEP: check poststart hook STEP: delete the pod with lifecycle hook Mar 22 01:28:41.072: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 22 01:28:41.115: INFO: Pod pod-with-poststart-exec-hook still exists Mar 22 01:28:43.116: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 22 01:28:43.122: INFO: Pod pod-with-poststart-exec-hook still exists Mar 22 01:28:45.116: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 22 01:28:45.122: INFO: Pod pod-with-poststart-exec-hook still exists Mar 22 01:28:47.116: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 22 01:28:47.121: INFO: Pod pod-with-poststart-exec-hook still exists Mar 22 01:28:49.117: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 22 01:28:49.121: INFO: Pod pod-with-poststart-exec-hook still exists Mar 22 01:28:51.117: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 22 01:28:51.122: INFO: Pod pod-with-poststart-exec-hook still exists Mar 22 01:28:53.116: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 22 01:28:53.120: INFO: Pod pod-with-poststart-exec-hook still exists Mar 22 01:28:55.116: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 22 01:28:55.121: INFO: Pod pod-with-poststart-exec-hook still exists Mar 22 01:28:57.116: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 22 01:28:57.121: INFO: Pod 
pod-with-poststart-exec-hook still exists Mar 22 01:28:59.116: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 22 01:28:59.122: INFO: Pod pod-with-poststart-exec-hook still exists Mar 22 01:29:01.116: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 22 01:29:01.121: INFO: Pod pod-with-poststart-exec-hook still exists Mar 22 01:29:03.116: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 22 01:29:03.121: INFO: Pod pod-with-poststart-exec-hook still exists Mar 22 01:29:05.116: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 22 01:29:05.121: INFO: Pod pod-with-poststart-exec-hook still exists Mar 22 01:29:07.115: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 22 01:29:07.121: INFO: Pod pod-with-poststart-exec-hook still exists Mar 22 01:29:09.116: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 22 01:29:09.122: INFO: Pod pod-with-poststart-exec-hook still exists Mar 22 01:29:11.116: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 22 01:29:11.122: INFO: Pod pod-with-poststart-exec-hook still exists Mar 22 01:29:13.116: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 22 01:29:13.120: INFO: Pod pod-with-poststart-exec-hook still exists Mar 22 01:29:15.116: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 22 01:29:15.121: INFO: Pod pod-with-poststart-exec-hook still exists Mar 22 01:29:17.116: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 22 01:29:17.122: INFO: Pod pod-with-poststart-exec-hook still exists Mar 22 01:29:19.116: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 22 01:29:19.121: INFO: Pod pod-with-poststart-exec-hook still exists Mar 22 01:29:21.116: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 22 01:29:21.121: INFO: Pod pod-with-poststart-exec-hook still exists Mar 22 01:29:23.116: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 22 01:29:23.120: INFO: Pod pod-with-poststart-exec-hook still exists Mar 22 01:29:25.116: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 22 01:29:25.122: INFO: Pod pod-with-poststart-exec-hook still exists Mar 22 01:29:27.116: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 22 01:29:27.121: INFO: Pod pod-with-poststart-exec-hook still exists Mar 22 01:29:29.117: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 22 01:29:29.121: INFO: Pod pod-with-poststart-exec-hook still exists Mar 22 01:29:31.116: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 22 01:29:31.121: INFO: Pod pod-with-poststart-exec-hook still exists Mar 22 01:29:33.116: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 22 01:29:33.122: INFO: Pod pod-with-poststart-exec-hook still exists Mar 22 01:29:35.116: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 22 01:29:35.120: INFO: Pod pod-with-poststart-exec-hook still exists Mar 22 01:29:37.115: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 22 01:29:37.121: INFO: Pod pod-with-poststart-exec-hook no longer exists [AfterEach] [sig-node] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 01:29:37.121: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-8533" for this suite. 
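The long "still exists" run above is only the framework polling out the hooked pod's graceful deletion; the hook itself was verified earlier in the spec. For reference, a postStart exec hook is declared on the container spec as below, a sketch with a placeholder hook command (the actual test curls its handler pod's echo endpoint), using corev1.Handler, the type's name in the v1.21-era API (renamed LifecycleHandler in later releases):

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// postStartPod builds a pod whose container runs an exec hook right after
// start; the kubelet does not mark the container Running until the hook
// command returns, and kills the container if the hook fails.
func postStartPod(targetURL string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-with-poststart-exec-hook"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "pod-with-poststart-exec-hook",
				Image: "k8s.gcr.io/e2e-test-images/agnhost:2.28",
				Lifecycle: &corev1.Lifecycle{
					PostStart: &corev1.Handler{
						Exec: &corev1.ExecAction{
							// Placeholder command: the e2e hook hits the
							// handler pod's HTTP endpoint to prove it ran.
							Command: []string{"sh", "-c", "curl " + targetURL},
						},
					},
				},
			}},
		},
	}
}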
• [SLOW TEST:66.332 seconds] [sig-node] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 when create a pod with lifecycle hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:43 should execute poststart exec hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":330,"completed":215,"skipped":3912,"failed":16,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-apps] CronJob should support CronJob API operations [Conformance]"]} SSSSS ------------------------------ [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 01:29:37.131: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name configmap-test-upd-165a7822-3a4c-4a29-a95e-458ce287b6a9 STEP: Creating the pod Mar 22 01:29:37.298: INFO: The status of Pod pod-configmaps-c699afa8-4a0b-4d62-aa0d-872597b12bbf is Pending, waiting for it to be Running 
(with Ready = true) Mar 22 01:29:39.304: INFO: The status of Pod pod-configmaps-c699afa8-4a0b-4d62-aa0d-872597b12bbf is Pending, waiting for it to be Running (with Ready = true) Mar 22 01:29:41.303: INFO: The status of Pod pod-configmaps-c699afa8-4a0b-4d62-aa0d-872597b12bbf is Running (Ready = true) STEP: Updating configmap configmap-test-upd-165a7822-3a4c-4a29-a95e-458ce287b6a9 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 01:29:43.375: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-1241" for this suite. • [SLOW TEST:6.254 seconds] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":330,"completed":216,"skipped":3917,"failed":16,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-apps] CronJob should support CronJob API operations [Conformance]"]} SSSSSS ------------------------------ [sig-apps] ReplicaSet Replicaset should have a working scale subresource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 01:29:43.386: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service 
account to be provisioned in namespace [It] Replicaset should have a working scale subresource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating replica set "test-rs" Mar 22 01:29:43.464: INFO: Pod name sample-pod: Found 0 pods out of 1 Mar 22 01:29:48.467: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running STEP: getting scale subresource STEP: updating a scale subresource STEP: verifying the replicaset Spec.Replicas was modified STEP: Patch a scale subresource [AfterEach] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 01:29:48.570: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-3777" for this suite. • [SLOW TEST:5.299 seconds] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 Replicaset should have a working scale subresource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet Replicaset should have a working scale subresource [Conformance]","total":330,"completed":217,"skipped":3923,"failed":16,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-apps] CronJob should support CronJob API operations [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-storage] Projected secret
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 01:29:48.685: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating projection with secret that has name projected-secret-test-map-920919af-6ec1-4b89-98a2-b518043c042f STEP: Creating a pod to test consume secrets Mar 22 01:29:48.846: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-684f2a40-e5dd-415b-866e-0265ef82db3a" in namespace "projected-6090" to be "Succeeded or Failed" Mar 22 01:29:48.960: INFO: Pod "pod-projected-secrets-684f2a40-e5dd-415b-866e-0265ef82db3a": Phase="Pending", Reason="", readiness=false. Elapsed: 114.227617ms Mar 22 01:29:50.964: INFO: Pod "pod-projected-secrets-684f2a40-e5dd-415b-866e-0265ef82db3a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.118359929s Mar 22 01:29:53.014: INFO: Pod "pod-projected-secrets-684f2a40-e5dd-415b-866e-0265ef82db3a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.168070982s Mar 22 01:29:55.094: INFO: Pod "pod-projected-secrets-684f2a40-e5dd-415b-866e-0265ef82db3a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.248746755s STEP: Saw pod success Mar 22 01:29:55.095: INFO: Pod "pod-projected-secrets-684f2a40-e5dd-415b-866e-0265ef82db3a" satisfied condition "Succeeded or Failed" Mar 22 01:29:55.182: INFO: Trying to get logs from node latest-worker pod pod-projected-secrets-684f2a40-e5dd-415b-866e-0265ef82db3a container projected-secret-volume-test: STEP: delete the pod Mar 22 01:29:55.649: INFO: Waiting for pod pod-projected-secrets-684f2a40-e5dd-415b-866e-0265ef82db3a to disappear Mar 22 01:29:55.775: INFO: Pod pod-projected-secrets-684f2a40-e5dd-415b-866e-0265ef82db3a no longer exists [AfterEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 01:29:55.775: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6090" for this suite. 
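For readers reproducing the projected-secret case above outside the e2e harness, here is a minimal sketch. The secret name, key/path mapping, image, and mount path are illustrative stand-ins (the test generates its own names); the point is the projected-volume schema with a key remapped to a new path and a per-item file mode:
kubectl create secret generic demo-secret --from-literal=data-1=value-1
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets-demo
spec:
  restartPolicy: Never
  containers:
  - name: projected-secret-volume-test
    image: busybox                     # illustrative; not the image the test uses
    # Show the mode and content of the remapped key, then exit so the pod ends Succeeded.
    command: ["sh", "-c", "ls -l /etc/projected-secret-volume && cat /etc/projected-secret-volume/new-path-data-1"]
    volumeMounts:
    - name: secret-vol
      mountPath: /etc/projected-secret-volume
  volumes:
  - name: secret-vol
    projected:
      sources:
      - secret:
          name: demo-secret
          items:
          - key: data-1
            path: new-path-data-1      # the "mapping": key data-1 appears under this path
            mode: 0400                 # the per-item mode the [LinuxOnly] check asserts
EOF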
• [SLOW TEST:7.098 seconds] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":330,"completed":218,"skipped":3950,"failed":16,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-apps] CronJob should support CronJob API operations [Conformance]"]} SSSSSSSSSSS ------------------------------ [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-storage] Projected combined /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 01:29:55.784: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name configmap-projected-all-test-volume-c324e1bd-0c13-4920-8510-6b6e0cd3221a STEP: Creating secret with name secret-projected-all-test-volume-64ccb67d-8ab9-449a-ace9-5af5cb234f22 STEP: Creating a pod to test Check all projections for projected 
volume plugin Mar 22 01:29:56.280: INFO: Waiting up to 5m0s for pod "projected-volume-afd0dbbe-6a8f-411c-866d-f23b9a8e2cf5" in namespace "projected-5787" to be "Succeeded or Failed" Mar 22 01:29:56.500: INFO: Pod "projected-volume-afd0dbbe-6a8f-411c-866d-f23b9a8e2cf5": Phase="Pending", Reason="", readiness=false. Elapsed: 219.340223ms Mar 22 01:29:58.505: INFO: Pod "projected-volume-afd0dbbe-6a8f-411c-866d-f23b9a8e2cf5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.224326215s Mar 22 01:30:00.511: INFO: Pod "projected-volume-afd0dbbe-6a8f-411c-866d-f23b9a8e2cf5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.230169673s STEP: Saw pod success Mar 22 01:30:00.511: INFO: Pod "projected-volume-afd0dbbe-6a8f-411c-866d-f23b9a8e2cf5" satisfied condition "Succeeded or Failed" Mar 22 01:30:00.521: INFO: Trying to get logs from node latest-worker pod projected-volume-afd0dbbe-6a8f-411c-866d-f23b9a8e2cf5 container projected-all-volume-test: STEP: delete the pod Mar 22 01:30:00.815: INFO: Waiting for pod projected-volume-afd0dbbe-6a8f-411c-866d-f23b9a8e2cf5 to disappear Mar 22 01:30:00.832: INFO: Pod projected-volume-afd0dbbe-6a8f-411c-866d-f23b9a8e2cf5 no longer exists [AfterEach] [sig-storage] Projected combined /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 01:30:00.832: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5787" for this suite. • [SLOW TEST:5.055 seconds] [sig-storage] Projected combined /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":330,"completed":219,"skipped":3961,"failed":16,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session 
affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-apps] CronJob should support CronJob API operations [Conformance]"]} SS ------------------------------ [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 01:30:00.839: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin Mar 22 01:30:01.008: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d68e5497-07e1-4619-b635-d466ce806a32" in namespace "downward-api-7509" to be "Succeeded or Failed" Mar 22 01:30:01.090: INFO: Pod "downwardapi-volume-d68e5497-07e1-4619-b635-d466ce806a32": Phase="Pending", Reason="", readiness=false. Elapsed: 81.192277ms Mar 22 01:30:03.094: INFO: Pod "downwardapi-volume-d68e5497-07e1-4619-b635-d466ce806a32": Phase="Pending", Reason="", readiness=false. Elapsed: 2.085521625s Mar 22 01:30:05.099: INFO: Pod "downwardapi-volume-d68e5497-07e1-4619-b635-d466ce806a32": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.090405148s STEP: Saw pod success Mar 22 01:30:05.099: INFO: Pod "downwardapi-volume-d68e5497-07e1-4619-b635-d466ce806a32" satisfied condition "Succeeded or Failed" Mar 22 01:30:05.106: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-d68e5497-07e1-4619-b635-d466ce806a32 container client-container: STEP: delete the pod Mar 22 01:30:05.175: INFO: Waiting for pod downwardapi-volume-d68e5497-07e1-4619-b635-d466ce806a32 to disappear Mar 22 01:30:05.191: INFO: Pod downwardapi-volume-d68e5497-07e1-4619-b635-d466ce806a32 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 01:30:05.191: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7509" for this suite. 
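The downward API case above asserts the same kind of per-item file mode, this time on a file rendered from pod metadata instead of a secret. A minimal sketch with illustrative names (the real test generates its pod name and verifies the mode from inside the container):
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-demo
  labels:
    zone: us-east-coast                # illustrative label to project into the file
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox                     # illustrative image
    command: ["sh", "-c", "ls -l /etc/podinfo/podlabels && cat /etc/podinfo/podlabels"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: podlabels
        fieldRef:
          fieldPath: metadata.labels   # downward API source for the file contents
        mode: 0400                     # the per-item mode the test asserts
EOF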
•{"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":330,"completed":220,"skipped":3963,"failed":16,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-apps] CronJob should support CronJob API operations [Conformance]"]} SSSSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 01:30:05.201: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create ConfigMap with empty key [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap that has name configmap-test-emptyKey-5d21de95-4709-4901-8989-d6f6b6587624 [AfterEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 01:30:05.330: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-244" for this suite. 
•{"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":330,"completed":221,"skipped":3978,"failed":16,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-apps] CronJob should support CronJob API operations [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 01:30:05.338: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241 [BeforeEach] Kubectl logs /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1386 STEP: creating an pod Mar 22 01:30:05.466: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=kubectl-2415 run logs-generator --image=k8s.gcr.io/e2e-test-images/agnhost:2.28 --restart=Never -- logs-generator --log-lines-total 100 --run-duration 20s' Mar 22 01:30:09.485: INFO: stderr: "" Mar 22 01:30:09.485: INFO: stdout: "pod/logs-generator created\n" [It] should be able to retrieve and filter logs [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Waiting for log generator to start. 
Mar 22 01:30:09.485: INFO: Waiting up to 5m0s for 1 pods to be running and ready, or succeeded: [logs-generator] Mar 22 01:30:09.485: INFO: Waiting up to 5m0s for pod "logs-generator" in namespace "kubectl-2415" to be "running and ready, or succeeded" Mar 22 01:30:09.496: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 10.559257ms Mar 22 01:30:11.613: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.128123336s Mar 22 01:30:13.618: INFO: Pod "logs-generator": Phase="Running", Reason="", readiness=true. Elapsed: 4.13330617s Mar 22 01:30:13.619: INFO: Pod "logs-generator" satisfied condition "running and ready, or succeeded" Mar 22 01:30:13.619: INFO: Wanted all 1 pods to be running and ready, or succeeded. Result: true. Pods: [logs-generator] STEP: checking for matching strings Mar 22 01:30:13.619: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=kubectl-2415 logs logs-generator logs-generator' Mar 22 01:30:13.738: INFO: stderr: "" Mar 22 01:30:13.738: INFO: stdout: "I0322 01:30:12.381548 1 logs_generator.go:76] 0 POST /api/v1/namespaces/default/pods/vmv 524\nI0322 01:30:12.581740 1 logs_generator.go:76] 1 GET /api/v1/namespaces/default/pods/ps4g 268\nI0322 01:30:12.784986 1 logs_generator.go:76] 2 PUT /api/v1/namespaces/kube-system/pods/4kkn 431\nI0322 01:30:12.981688 1 logs_generator.go:76] 3 GET /api/v1/namespaces/ns/pods/kr27 490\nI0322 01:30:13.181686 1 logs_generator.go:76] 4 POST /api/v1/namespaces/kube-system/pods/lmsk 307\nI0322 01:30:13.381680 1 logs_generator.go:76] 5 GET /api/v1/namespaces/kube-system/pods/dmgz 317\nI0322 01:30:13.581760 1 logs_generator.go:76] 6 PUT /api/v1/namespaces/ns/pods/xrfs 300\n" STEP: limiting log lines Mar 22 01:30:13.738: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=kubectl-2415 logs logs-generator logs-generator --tail=1' Mar 22 01:30:13.842: INFO: stderr: "" Mar 22 01:30:13.843: INFO: stdout: "I0322 01:30:13.781672 1 logs_generator.go:76] 7 PUT /api/v1/namespaces/default/pods/6v9g 288\n" Mar 22 01:30:13.843: INFO: got output "I0322 01:30:13.781672 1 logs_generator.go:76] 7 PUT /api/v1/namespaces/default/pods/6v9g 288\n" STEP: limiting log bytes Mar 22 01:30:13.843: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=kubectl-2415 logs logs-generator logs-generator --limit-bytes=1' Mar 22 01:30:13.954: INFO: stderr: "" Mar 22 01:30:13.954: INFO: stdout: "I" Mar 22 01:30:13.954: INFO: got output "I" STEP: exposing timestamps Mar 22 01:30:13.954: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=kubectl-2415 logs logs-generator logs-generator --tail=1 --timestamps' Mar 22 01:30:14.055: INFO: stderr: "" Mar 22 01:30:14.055: INFO: stdout: "2021-03-22T01:30:13.981891266Z I0322 01:30:13.981724 1 logs_generator.go:76] 8 GET /api/v1/namespaces/default/pods/zwh 471\n" Mar 22 01:30:14.055: INFO: got output "2021-03-22T01:30:13.981891266Z I0322 01:30:13.981724 1 logs_generator.go:76] 8 GET /api/v1/namespaces/default/pods/zwh 471\n" STEP: restricting to a time range Mar 22 01:30:16.555: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=kubectl-2415 logs logs-generator logs-generator --since=1s' Mar 22 01:30:16.687: INFO: stderr: "" Mar 22 01:30:16.687:
INFO: stdout: "I0322 01:30:15.781679 1 logs_generator.go:76] 17 POST /api/v1/namespaces/ns/pods/d2l5 511\nI0322 01:30:15.981782 1 logs_generator.go:76] 18 POST /api/v1/namespaces/default/pods/khwl 302\nI0322 01:30:16.181724 1 logs_generator.go:76] 19 GET /api/v1/namespaces/default/pods/jlrh 480\nI0322 01:30:16.381713 1 logs_generator.go:76] 20 GET /api/v1/namespaces/ns/pods/v9m 416\nI0322 01:30:16.581772 1 logs_generator.go:76] 21 POST /api/v1/namespaces/ns/pods/b2n 286\n" Mar 22 01:30:16.687: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=kubectl-2415 logs logs-generator logs-generator --since=24h' Mar 22 01:30:16.798: INFO: stderr: "" Mar 22 01:30:16.798: INFO: stdout: "I0322 01:30:12.381548 1 logs_generator.go:76] 0 POST /api/v1/namespaces/default/pods/vmv 524\nI0322 01:30:12.581740 1 logs_generator.go:76] 1 GET /api/v1/namespaces/default/pods/ps4g 268\nI0322 01:30:12.784986 1 logs_generator.go:76] 2 PUT /api/v1/namespaces/kube-system/pods/4kkn 431\nI0322 01:30:12.981688 1 logs_generator.go:76] 3 GET /api/v1/namespaces/ns/pods/kr27 490\nI0322 01:30:13.181686 1 logs_generator.go:76] 4 POST /api/v1/namespaces/kube-system/pods/lmsk 307\nI0322 01:30:13.381680 1 logs_generator.go:76] 5 GET /api/v1/namespaces/kube-system/pods/dmgz 317\nI0322 01:30:13.581760 1 logs_generator.go:76] 6 PUT /api/v1/namespaces/ns/pods/xrfs 300\nI0322 01:30:13.781672 1 logs_generator.go:76] 7 PUT /api/v1/namespaces/default/pods/6v9g 288\nI0322 01:30:13.981724 1 logs_generator.go:76] 8 GET /api/v1/namespaces/default/pods/zwh 471\nI0322 01:30:14.181690 1 logs_generator.go:76] 9 GET /api/v1/namespaces/kube-system/pods/sbch 578\nI0322 01:30:14.381691 1 logs_generator.go:76] 10 GET /api/v1/namespaces/kube-system/pods/t5rx 328\nI0322 01:30:14.581764 1 logs_generator.go:76] 11 PUT /api/v1/namespaces/ns/pods/p9k 444\nI0322 01:30:14.781732 1 logs_generator.go:76] 12 POST /api/v1/namespaces/default/pods/4ls2 313\nI0322 01:30:14.981749 1 logs_generator.go:76] 13 POST /api/v1/namespaces/default/pods/hvv 281\nI0322 01:30:15.181608 1 logs_generator.go:76] 14 POST /api/v1/namespaces/default/pods/vrr 387\nI0322 01:30:15.381719 1 logs_generator.go:76] 15 POST /api/v1/namespaces/kube-system/pods/lw57 315\nI0322 01:30:15.581736 1 logs_generator.go:76] 16 GET /api/v1/namespaces/ns/pods/n5b7 569\nI0322 01:30:15.781679 1 logs_generator.go:76] 17 POST /api/v1/namespaces/ns/pods/d2l5 511\nI0322 01:30:15.981782 1 logs_generator.go:76] 18 POST /api/v1/namespaces/default/pods/khwl 302\nI0322 01:30:16.181724 1 logs_generator.go:76] 19 GET /api/v1/namespaces/default/pods/jlrh 480\nI0322 01:30:16.381713 1 logs_generator.go:76] 20 GET /api/v1/namespaces/ns/pods/v9m 416\nI0322 01:30:16.581772 1 logs_generator.go:76] 21 POST /api/v1/namespaces/ns/pods/b2n 286\nI0322 01:30:16.781739 1 logs_generator.go:76] 22 POST /api/v1/namespaces/ns/pods/4c6m 372\n" [AfterEach] Kubectl logs /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1391 Mar 22 01:30:16.799: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=kubectl-2415 delete pod logs-generator' Mar 22 01:30:25.315: INFO: stderr: "" Mar 22 01:30:25.315: INFO: stdout: "pod \"logs-generator\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 01:30:25.315: INFO: Waiting up to 3m0s for all (but 0) 
nodes to be ready STEP: Destroying namespace "kubectl-2415" for this suite. • [SLOW TEST:20.024 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl logs /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1383 should be able to retrieve and filter logs [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]","total":330,"completed":222,"skipped":4011,"failed":16,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-apps] CronJob should support CronJob API operations [Conformance]"]} SSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 01:30:25.363: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating secret with name secret-test-map-b53aa3bc-5887-438f-83a1-70f70ad81eb8 STEP: Creating a pod to test consume secrets Mar 22 01:30:25.483: INFO: Waiting up to 5m0s for pod 
"pod-secrets-acee16d1-1f69-405a-9a42-a6febc758667" in namespace "secrets-5531" to be "Succeeded or Failed" Mar 22 01:30:25.503: INFO: Pod "pod-secrets-acee16d1-1f69-405a-9a42-a6febc758667": Phase="Pending", Reason="", readiness=false. Elapsed: 20.762313ms Mar 22 01:30:27.538: INFO: Pod "pod-secrets-acee16d1-1f69-405a-9a42-a6febc758667": Phase="Pending", Reason="", readiness=false. Elapsed: 2.055071095s Mar 22 01:30:29.543: INFO: Pod "pod-secrets-acee16d1-1f69-405a-9a42-a6febc758667": Phase="Running", Reason="", readiness=true. Elapsed: 4.0608247s Mar 22 01:30:31.549: INFO: Pod "pod-secrets-acee16d1-1f69-405a-9a42-a6febc758667": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.066441383s STEP: Saw pod success Mar 22 01:30:31.549: INFO: Pod "pod-secrets-acee16d1-1f69-405a-9a42-a6febc758667" satisfied condition "Succeeded or Failed" Mar 22 01:30:31.553: INFO: Trying to get logs from node latest-worker2 pod pod-secrets-acee16d1-1f69-405a-9a42-a6febc758667 container secret-volume-test: STEP: delete the pod Mar 22 01:30:31.613: INFO: Waiting for pod pod-secrets-acee16d1-1f69-405a-9a42-a6febc758667 to disappear Mar 22 01:30:31.629: INFO: Pod pod-secrets-acee16d1-1f69-405a-9a42-a6febc758667 no longer exists [AfterEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 01:30:31.629: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-5531" for this suite. • [SLOW TEST:6.272 seconds] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":330,"completed":223,"skipped":4021,"failed":16,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services 
should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-apps] CronJob should support CronJob API operations [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment Deployment should have a working scale subresource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 01:30:31.635: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:86 [It] Deployment should have a working scale subresource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Mar 22 01:30:31.712: INFO: Creating simple deployment test-new-deployment Mar 22 01:30:31.754: INFO: deployment "test-new-deployment" doesn't have the required revision set Mar 22 01:30:33.763: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751973431, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751973431, loc:(*time.Location)(0x99208a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751973431, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751973431, loc:(*time.Location)(0x99208a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-new-deployment-847dcfb7fb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: getting scale subresource STEP: updating a scale subresource STEP: verifying the deployment Spec.Replicas was modified STEP: Patch a scale subresource [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:80 Mar 22 01:30:35.905: INFO: Deployment "test-new-deployment": &Deployment{ObjectMeta:{test-new-deployment deployment-1679 52747195-2758-48cb-887c-528c9c1b4d99 7012855 3 2021-03-22 01:30:31 +0000 UTC map[name:httpd] map[deployment.kubernetes.io/revision:1] [] [] [{e2e.test Update apps/v1 2021-03-22 01:30:31 +0000 UTC FieldsV1 
{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2021-03-22 01:30:35 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*4,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc001dcac68 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2021-03-22 01:30:35 +0000 UTC,LastTransitionTime:2021-03-22 01:30:35 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-new-deployment-847dcfb7fb" has successfully progressed.,LastUpdateTime:2021-03-22 01:30:35 +0000 UTC,LastTransitionTime:2021-03-22 01:30:31 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Mar 22 01:30:35.962: INFO: New ReplicaSet "test-new-deployment-847dcfb7fb" of Deployment "test-new-deployment": &ReplicaSet{ObjectMeta:{test-new-deployment-847dcfb7fb deployment-1679 28fcd3ea-b0d3-4b44-b2f0-2d6ff79ba0d3 7012862 3 2021-03-22 01:30:31 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[deployment.kubernetes.io/desired-replicas:4 deployment.kubernetes.io/max-replicas:5 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-new-deployment 
52747195-2758-48cb-887c-528c9c1b4d99 0xc003888110 0xc003888111}] [] [{kube-controller-manager Update apps/v1 2021-03-22 01:30:35 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"52747195-2758-48cb-887c-528c9c1b4d99\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*4,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 847dcfb7fb,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc003888178 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:2,FullyLabeledReplicas:2,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Mar 22 01:30:36.017: INFO: Pod "test-new-deployment-847dcfb7fb-27lbk" is not available: &Pod{ObjectMeta:{test-new-deployment-847dcfb7fb-27lbk test-new-deployment-847dcfb7fb- deployment-1679 3f471e55-ded2-499f-9061-6cde85907d0e 7012864 0 2021-03-22 01:30:35 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[] [{apps/v1 ReplicaSet test-new-deployment-847dcfb7fb 28fcd3ea-b0d3-4b44-b2f0-2d6ff79ba0d3 0xc003888507 0xc003888508}] [] [{kube-controller-manager Update v1 2021-03-22 01:30:35 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"28fcd3ea-b0d3-4b44-b2f0-2d6ff79ba0d3\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-03-22 01:30:35 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-kk95v,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-kk95v,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-kk95v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-22 01:30:35 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-22 01:30:35 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-22 01:30:35 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-22 01:30:35 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.9,PodIP:,StartTime:2021-03-22 01:30:35 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 22 01:30:36.017: INFO: Pod "test-new-deployment-847dcfb7fb-2h66c" is not available: &Pod{ObjectMeta:{test-new-deployment-847dcfb7fb-2h66c test-new-deployment-847dcfb7fb- deployment-1679 d574300b-4c29-4cb2-b746-7e9113a03176 7012865 0 2021-03-22 01:30:35 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[] [{apps/v1 ReplicaSet test-new-deployment-847dcfb7fb 28fcd3ea-b0d3-4b44-b2f0-2d6ff79ba0d3 0xc003888697 0xc003888698}] [] [{kube-controller-manager Update v1 2021-03-22 01:30:35 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"28fcd3ea-b0d3-4b44-b2f0-2d6ff79ba0d3\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-kk95v,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-kk95v,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-kk95v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabili
ties:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 22 01:30:36.017: INFO: Pod "test-new-deployment-847dcfb7fb-4h5cc" is available: &Pod{ObjectMeta:{test-new-deployment-847dcfb7fb-4h5cc test-new-deployment-847dcfb7fb- deployment-1679 c6d73d8b-6529-4d74-ac23-58b273ff5b70 7012845 0 2021-03-22 01:30:31 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[] [{apps/v1 ReplicaSet test-new-deployment-847dcfb7fb 28fcd3ea-b0d3-4b44-b2f0-2d6ff79ba0d3 0xc0038887a0 0xc0038887a1}] [] [{kube-controller-manager Update v1 2021-03-22 01:30:31 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"28fcd3ea-b0d3-4b44-b2f0-2d6ff79ba0d3\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-03-22 01:30:35 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.233\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-kk95v,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-kk95v,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-kk95v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-22 01:30:31 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-22 01:30:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-22 01:30:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-22 01:30:31 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:10.244.1.233,StartTime:2021-03-22 01:30:31 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-03-22 01:30:34 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:containerd://590aa3501c604d8fcc273c023eb3691b59a02e9cf0b407ba74d4d021121d6972,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.233,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 01:30:36.017: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-1679" for this suite. •{"msg":"PASSED [sig-apps] Deployment Deployment should have a working scale subresource [Conformance]","total":330,"completed":224,"skipped":4054,"failed":16,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-apps] CronJob should support CronJob API operations [Conformance]"]} SSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook 
[Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 01:30:36.243: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 22 01:30:37.015: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 22 01:30:39.082: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751973437, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751973437, loc:(*time.Location)(0x99208a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751973437, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751973436, loc:(*time.Location)(0x99208a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-8977db\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 22 01:30:41.242: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751973437, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751973437, loc:(*time.Location)(0x99208a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751973437, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751973436, loc:(*time.Location)(0x99208a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-8977db\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 22 01:30:44.866: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a mutating webhook should work [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a mutating webhook configuration STEP: Updating a mutating webhook configuration's rules to not include the create operation STEP: 
Creating a configMap that should not be mutated STEP: Patching a mutating webhook configuration's rules to include the create operation STEP: Creating a configMap that should be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 01:30:46.386: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-2559" for this suite. STEP: Destroying namespace "webhook-2559-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 • [SLOW TEST:10.322 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a mutating webhook should work [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":330,"completed":225,"skipped":4060,"failed":16,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-apps] CronJob should support CronJob API operations [Conformance]"]} SS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a 
kubernetes client Mar 22 01:30:46.565: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name projected-configmap-test-volume-map-6eab0fed-3022-4c76-b90e-fb71de1eb4bb STEP: Creating a pod to test consume configMaps Mar 22 01:30:46.643: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-7a5abaff-e567-4b82-98e0-88a7bb965d61" in namespace "projected-8534" to be "Succeeded or Failed" Mar 22 01:30:46.676: INFO: Pod "pod-projected-configmaps-7a5abaff-e567-4b82-98e0-88a7bb965d61": Phase="Pending", Reason="", readiness=false. Elapsed: 32.730436ms Mar 22 01:30:48.681: INFO: Pod "pod-projected-configmaps-7a5abaff-e567-4b82-98e0-88a7bb965d61": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037446759s Mar 22 01:30:50.686: INFO: Pod "pod-projected-configmaps-7a5abaff-e567-4b82-98e0-88a7bb965d61": Phase="Running", Reason="", readiness=true. Elapsed: 4.042879861s Mar 22 01:30:52.692: INFO: Pod "pod-projected-configmaps-7a5abaff-e567-4b82-98e0-88a7bb965d61": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.048380644s STEP: Saw pod success Mar 22 01:30:52.692: INFO: Pod "pod-projected-configmaps-7a5abaff-e567-4b82-98e0-88a7bb965d61" satisfied condition "Succeeded or Failed" Mar 22 01:30:52.695: INFO: Trying to get logs from node latest-worker2 pod pod-projected-configmaps-7a5abaff-e567-4b82-98e0-88a7bb965d61 container agnhost-container: STEP: delete the pod Mar 22 01:30:52.759: INFO: Waiting for pod pod-projected-configmaps-7a5abaff-e567-4b82-98e0-88a7bb965d61 to disappear Mar 22 01:30:52.766: INFO: Pod pod-projected-configmaps-7a5abaff-e567-4b82-98e0-88a7bb965d61 no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 01:30:52.766: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8534" for this suite. 
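------------------------------
The projected-ConfigMap test above builds a pod that mounts a ConfigMap through a projected volume, remaps a key to a new path inside the mount, and runs as a non-root user before checking the file contents. A minimal client-go sketch of that pod shape follows; it is not the e2e framework's own code, and the names used (demo-cm, the agnhost image tag, the mount paths) are illustrative assumptions rather than values from this run.

package main

import (
    "context"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    // Build a client from the same kubeconfig the suite uses.
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    cs := kubernetes.NewForConfigOrDie(cfg)

    // Illustrative ConfigMap with one key to be remapped in the volume.
    cm := &corev1.ConfigMap{
        ObjectMeta: metav1.ObjectMeta{Name: "demo-cm"},
        Data:       map[string]string{"data-1": "value-1"},
    }
    if _, err := cs.CoreV1().ConfigMaps("default").Create(context.TODO(), cm, metav1.CreateOptions{}); err != nil {
        panic(err)
    }

    nonRoot := true
    uid := int64(1000)
    mode := int32(0444)

    pod := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "projected-cm-demo"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            // Run as non-root, as the test name requires.
            SecurityContext: &corev1.PodSecurityContext{
                RunAsNonRoot: &nonRoot,
                RunAsUser:    &uid,
            },
            Volumes: []corev1.Volume{{
                Name: "cm-vol",
                VolumeSource: corev1.VolumeSource{
                    Projected: &corev1.ProjectedVolumeSource{
                        Sources: []corev1.VolumeProjection{{
                            ConfigMap: &corev1.ConfigMapProjection{
                                LocalObjectReference: corev1.LocalObjectReference{Name: "demo-cm"},
                                // Map the key "data-1" to a different path inside the volume.
                                Items: []corev1.KeyToPath{{Key: "data-1", Path: "path/to/data-1", Mode: &mode}},
                            },
                        }},
                    },
                },
            }},
            Containers: []corev1.Container{{
                Name:  "reader",
                Image: "k8s.gcr.io/e2e-test-images/agnhost:2.28", // illustrative tag
                // agnhost's mounttest subcommand prints the file so the
                // content can be asserted from the pod logs.
                Args:         []string{"mounttest", "--file_content=/etc/projected/path/to/data-1"},
                VolumeMounts: []corev1.VolumeMount{{Name: "cm-vol", MountPath: "/etc/projected"}},
            }},
        },
    }
    if _, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
        panic(err)
    }
}
------------------------------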
• [SLOW TEST:6.211 seconds] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":330,"completed":226,"skipped":4062,"failed":16,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-apps] CronJob should support CronJob API operations [Conformance]"]} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 01:30:52.776: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating projection with secret that has name projected-secret-test-9969ee02-25f0-456d-8d5b-bcce4340e1d9 STEP: Creating a pod to test consume secrets Mar 22 01:30:52.942: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-4f267271-f532-4b47-9d9f-882d1494bff4" in namespace "projected-1700" to 
be "Succeeded or Failed" Mar 22 01:30:52.960: INFO: Pod "pod-projected-secrets-4f267271-f532-4b47-9d9f-882d1494bff4": Phase="Pending", Reason="", readiness=false. Elapsed: 17.828931ms Mar 22 01:30:54.965: INFO: Pod "pod-projected-secrets-4f267271-f532-4b47-9d9f-882d1494bff4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022926081s Mar 22 01:30:56.970: INFO: Pod "pod-projected-secrets-4f267271-f532-4b47-9d9f-882d1494bff4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.027683921s STEP: Saw pod success Mar 22 01:30:56.970: INFO: Pod "pod-projected-secrets-4f267271-f532-4b47-9d9f-882d1494bff4" satisfied condition "Succeeded or Failed" Mar 22 01:30:56.972: INFO: Trying to get logs from node latest-worker pod pod-projected-secrets-4f267271-f532-4b47-9d9f-882d1494bff4 container projected-secret-volume-test: STEP: delete the pod Mar 22 01:30:57.002: INFO: Waiting for pod pod-projected-secrets-4f267271-f532-4b47-9d9f-882d1494bff4 to disappear Mar 22 01:30:57.019: INFO: Pod pod-projected-secrets-4f267271-f532-4b47-9d9f-882d1494bff4 no longer exists [AfterEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 01:30:57.019: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1700" for this suite. •{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":330,"completed":227,"skipped":4080,"failed":16,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-apps] CronJob should support CronJob API operations [Conformance]"]} SSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 01:30:57.029: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for deployment deletion to see if the garbage collector mistakenly deletes the rs STEP: Gathering metrics W0322 01:30:58.281799 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Mar 22 01:32:00.301: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering. [AfterEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 01:32:00.301: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-7310" for this suite. • [SLOW TEST:63.282 seconds] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":330,"completed":228,"skipped":4096,"failed":16,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout 
work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-apps] CronJob should support CronJob API operations [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-node] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 01:32:00.312: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test override command Mar 22 01:32:00.423: INFO: Waiting up to 5m0s for pod "client-containers-5819eb41-3762-4151-a6b7-a7d6805547a3" in namespace "containers-3658" to be "Succeeded or Failed" Mar 22 01:32:00.427: INFO: Pod "client-containers-5819eb41-3762-4151-a6b7-a7d6805547a3": Phase="Pending", Reason="", readiness=false. Elapsed: 3.578724ms Mar 22 01:32:02.432: INFO: Pod "client-containers-5819eb41-3762-4151-a6b7-a7d6805547a3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008925742s Mar 22 01:32:04.436: INFO: Pod "client-containers-5819eb41-3762-4151-a6b7-a7d6805547a3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01353589s STEP: Saw pod success Mar 22 01:32:04.437: INFO: Pod "client-containers-5819eb41-3762-4151-a6b7-a7d6805547a3" satisfied condition "Succeeded or Failed" Mar 22 01:32:04.440: INFO: Trying to get logs from node latest-worker2 pod client-containers-5819eb41-3762-4151-a6b7-a7d6805547a3 container agnhost-container: STEP: delete the pod Mar 22 01:32:04.628: INFO: Waiting for pod client-containers-5819eb41-3762-4151-a6b7-a7d6805547a3 to disappear Mar 22 01:32:04.685: INFO: Pod client-containers-5819eb41-3762-4151-a6b7-a7d6805547a3 no longer exists [AfterEach] [sig-node] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 01:32:04.685: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-3658" for this suite. 
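------------------------------
The entrypoint-override test above relies on the rule that a container's Command field replaces the image's ENTRYPOINT (and Args, if set, replaces its CMD); with neither set, the image defaults apply. A minimal client-go sketch under that assumption follows; the pod name, image tag, and command arguments are illustrative, not taken from this run.

package main

import (
    "context"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    cs := kubernetes.NewForConfigOrDie(cfg)

    pod := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "entrypoint-override-demo"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Containers: []corev1.Container{{
                Name:  "demo",
                Image: "k8s.gcr.io/e2e-test-images/agnhost:2.28", // illustrative tag
                // Command replaces the image's ENTRYPOINT; the extra
                // strings here are passed to the overriding binary, so the
                // container runs this instead of the image default.
                Command: []string{"/agnhost", "entrypoint-tester", "override", "arguments"},
            }},
        },
    }
    if _, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
        panic(err)
    }
}
------------------------------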
•{"msg":"PASSED [sig-node] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":330,"completed":229,"skipped":4125,"failed":16,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-apps] CronJob should support CronJob API operations [Conformance]"]} SSSSS ------------------------------ [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 01:32:04.817: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should be able to change the type from ClusterIP to ExternalName [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a service clusterip-service with the type=ClusterIP in namespace services-6897 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-6897 STEP: creating replication controller externalsvc in namespace services-6897 I0322 01:32:05.038783 7 runners.go:190] Created replication controller with name: externalsvc, namespace: services-6897, replica count: 2 I0322 01:32:08.090204 7 runners.go:190] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0322 
01:32:11.090404 7 runners.go:190] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0322 01:32:14.091859 7 runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the ClusterIP service to type=ExternalName Mar 22 01:32:14.137: INFO: Creating new exec pod Mar 22 01:32:18.213: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=services-6897 exec execpod69tds -- /bin/sh -x -c nslookup clusterip-service.services-6897.svc.cluster.local' Mar 22 01:32:18.490: INFO: stderr: "+ nslookup clusterip-service.services-6897.svc.cluster.local\n" Mar 22 01:32:18.490: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nclusterip-service.services-6897.svc.cluster.local\tcanonical name = externalsvc.services-6897.svc.cluster.local.\nName:\texternalsvc.services-6897.svc.cluster.local\nAddress: 10.96.240.26\n\n" STEP: deleting ReplicationController externalsvc in namespace services-6897, will wait for the garbage collector to delete the pods Mar 22 01:32:18.552: INFO: Deleting ReplicationController externalsvc took: 6.604104ms Mar 22 01:32:18.653: INFO: Terminating ReplicationController externalsvc pods took: 100.775536ms Mar 22 01:32:55.213: INFO: Cleaning up the ClusterIP to ExternalName test service [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 01:32:55.250: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-6897" for this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • [SLOW TEST:50.442 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should be able to change the type from ClusterIP to ExternalName [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":330,"completed":230,"skipped":4130,"failed":16,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","[sig-network] Services should be able to switch 
session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-apps] CronJob should support CronJob API operations [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 01:32:55.260: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Mar 22 01:32:55.384: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Mar 22 01:32:55.416: INFO: Waiting for terminating namespaces to be deleted... Mar 22 01:32:55.419: INFO: Logging pods the apiserver thinks is on node latest-worker before test Mar 22 01:32:55.424: INFO: chaos-controller-manager-69c479c674-rdmrr from default started at 2021-03-22 00:02:52 +0000 UTC (1 container statuses recorded) Mar 22 01:32:55.424: INFO: Container chaos-mesh ready: true, restart count 0 Mar 22 01:32:55.424: INFO: chaos-daemon-vb9xf from default started at 2021-03-22 00:02:51 +0000 UTC (1 container statuses recorded) Mar 22 01:32:55.424: INFO: Container chaos-daemon ready: true, restart count 0 Mar 22 01:32:55.424: INFO: kindnet-l4mzm from kube-system started at 2021-03-22 00:02:51 +0000 UTC (1 container statuses recorded) Mar 22 01:32:55.424: INFO: Container kindnet-cni ready: true, restart count 0 Mar 22 01:32:55.424: INFO: kube-proxy-5wvjm from kube-system started at 2021-02-19 10:12:05 +0000 UTC (1 container statuses recorded) Mar 22 01:32:55.424: INFO: Container kube-proxy ready: true, restart count 0 Mar 22 01:32:55.424: INFO: execpod69tds from services-6897 started at 2021-03-22 01:32:14 +0000 UTC (1 container statuses recorded) Mar 22 01:32:55.424: INFO: Container agnhost-container ready: true, restart count 0 Mar 22 01:32:55.424: INFO: Logging pods the apiserver thinks is on node latest-worker2 before test Mar 22 01:32:55.429: INFO: chaos-daemon-4zjcg from default started at 2021-03-22 00:02:52 +0000 UTC (1 container statuses recorded) Mar 22 01:32:55.429: INFO: Container chaos-daemon ready: true, restart count 0 Mar 22 01:32:55.429: INFO: kindnet-7qb7q from kube-system started at 2021-03-22 00:02:52 +0000 UTC (1 container statuses recorded) Mar 22 01:32:55.429: INFO: Container kindnet-cni ready: true, restart count 0 Mar 22 01:32:55.429: INFO: kube-proxy-7q92q from kube-system started at 2021-02-19 10:12:05 +0000 UTC (1 container statuses recorded) Mar 22 01:32:55.429: INFO: Container kube-proxy 
ready: true, restart count 0 [It] validates resource limits of pods that are allowed to run [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: verifying the node has the label node latest-worker STEP: verifying the node has the label node latest-worker2 Mar 22 01:32:55.575: INFO: Pod chaos-controller-manager-69c479c674-rdmrr requesting resource cpu=25m on Node latest-worker Mar 22 01:32:55.575: INFO: Pod chaos-daemon-4zjcg requesting resource cpu=0m on Node latest-worker2 Mar 22 01:32:55.575: INFO: Pod chaos-daemon-vb9xf requesting resource cpu=0m on Node latest-worker Mar 22 01:32:55.575: INFO: Pod kindnet-7qb7q requesting resource cpu=100m on Node latest-worker2 Mar 22 01:32:55.575: INFO: Pod kindnet-l4mzm requesting resource cpu=100m on Node latest-worker Mar 22 01:32:55.575: INFO: Pod kube-proxy-5wvjm requesting resource cpu=0m on Node latest-worker Mar 22 01:32:55.575: INFO: Pod kube-proxy-7q92q requesting resource cpu=0m on Node latest-worker2 Mar 22 01:32:55.575: INFO: Pod execpod69tds requesting resource cpu=0m on Node latest-worker STEP: Starting Pods to consume most of the cluster CPU. Mar 22 01:32:55.575: INFO: Creating a pod which consumes cpu=11112m on Node latest-worker Mar 22 01:32:55.581: INFO: Creating a pod which consumes cpu=11130m on Node latest-worker2 STEP: Creating another pod that requires unavailable amount of CPU. STEP: Considering event: Type = [Normal], Name = [filler-pod-56f098ad-5e2e-4744-a82d-d3d087561bf2.166e861d0fefc531], Reason = [Scheduled], Message = [Successfully assigned sched-pred-7158/filler-pod-56f098ad-5e2e-4744-a82d-d3d087561bf2 to latest-worker] STEP: Considering event: Type = [Normal], Name = [filler-pod-56f098ad-5e2e-4744-a82d-d3d087561bf2.166e861d6afac624], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.4.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-56f098ad-5e2e-4744-a82d-d3d087561bf2.166e861de3326ac6], Reason = [Created], Message = [Created container filler-pod-56f098ad-5e2e-4744-a82d-d3d087561bf2] STEP: Considering event: Type = [Normal], Name = [filler-pod-56f098ad-5e2e-4744-a82d-d3d087561bf2.166e861e01136eba], Reason = [Started], Message = [Started container filler-pod-56f098ad-5e2e-4744-a82d-d3d087561bf2] STEP: Considering event: Type = [Normal], Name = [filler-pod-c1319790-c5e2-4869-b938-f044c24216be.166e861d11490e44], Reason = [Scheduled], Message = [Successfully assigned sched-pred-7158/filler-pod-c1319790-c5e2-4869-b938-f044c24216be to latest-worker2] STEP: Considering event: Type = [Normal], Name = [filler-pod-c1319790-c5e2-4869-b938-f044c24216be.166e861d7b54648d], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.4.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-c1319790-c5e2-4869-b938-f044c24216be.166e861e0e24cbaf], Reason = [Created], Message = [Created container filler-pod-c1319790-c5e2-4869-b938-f044c24216be] STEP: Considering event: Type = [Normal], Name = [filler-pod-c1319790-c5e2-4869-b938-f044c24216be.166e861e1dd064e3], Reason = [Started], Message = [Started container filler-pod-c1319790-c5e2-4869-b938-f044c24216be] STEP: Considering event: Type = [Warning], Name = [additional-pod.166e861e77025548], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 2 Insufficient cpu.] 
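------------------------------
The FailedScheduling event above is the expected outcome: the scheduler sums the CPU requests already placed on each node (for example 25m + 100m = 125m on latest-worker in the listing above), subtracts that from the node's allocatable CPU, and rejects any pod whose request exceeds the remainder on every schedulable node. A minimal client-go sketch of such an over-requesting pod follows; the 1000m figure is an illustrative assumption, not a value from this run.

package main

import (
    "context"

    corev1 "k8s.io/api/core/v1"
    "k8s.io/apimachinery/pkg/api/resource"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    cs := kubernetes.NewForConfigOrDie(cfg)

    // A pod whose CPU request exceeds what any node has left. The
    // scheduler checks request <= allocatable - sum(existing requests)
    // per node and, when no node fits, leaves the pod Pending with an
    // "Insufficient cpu" FailedScheduling event like the one logged above.
    pod := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "additional-pod"},
        Spec: corev1.PodSpec{
            Containers: []corev1.Container{{
                Name:  "pause",
                Image: "k8s.gcr.io/pause:3.4.1",
                Resources: corev1.ResourceRequirements{
                    Requests: corev1.ResourceList{
                        // Illustrative: any request above the free capacity triggers the event.
                        corev1.ResourceCPU: resource.MustParse("1000m"),
                    },
                },
            }},
        },
    }
    if _, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
        panic(err)
    }
}
------------------------------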
STEP: removing the label node off the node latest-worker2 STEP: verifying the node doesn't have the label node STEP: removing the label node off the node latest-worker STEP: verifying the node doesn't have the label node [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 01:33:02.777: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-7158" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:7.548 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates resource limits of pods that are allowed to run [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]","total":330,"completed":231,"skipped":4179,"failed":16,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-apps] CronJob should support CronJob API operations [Conformance]"]} SSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 01:33:02.809: INFO: >>> kubeConfig: /root/.kube/config STEP: 
Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all pods are removed when a namespace is deleted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a pod in the namespace STEP: Waiting for the pod to have running status STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there are no pods in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 01:33:57.280: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-5997" for this suite. STEP: Destroying namespace "nsdeletetest-2871" for this suite. Mar 22 01:33:57.292: INFO: Namespace nsdeletetest-2871 was already deleted STEP: Destroying namespace "nsdeletetest-472" for this suite. • [SLOW TEST:54.487 seconds] [sig-api-machinery] Namespaces [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all pods are removed when a namespace is deleted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","total":330,"completed":232,"skipped":4182,"failed":16,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-apps] CronJob should support CronJob API operations [Conformance]"]} SSSSSSSSS ------------------------------ [sig-node] 
Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 01:33:57.296: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test env composition Mar 22 01:33:57.380: INFO: Waiting up to 5m0s for pod "var-expansion-203fb18f-3aec-402c-9a7b-a05aa55ae671" in namespace "var-expansion-3058" to be "Succeeded or Failed" Mar 22 01:33:57.405: INFO: Pod "var-expansion-203fb18f-3aec-402c-9a7b-a05aa55ae671": Phase="Pending", Reason="", readiness=false. Elapsed: 25.146091ms Mar 22 01:33:59.409: INFO: Pod "var-expansion-203fb18f-3aec-402c-9a7b-a05aa55ae671": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02930241s Mar 22 01:34:01.414: INFO: Pod "var-expansion-203fb18f-3aec-402c-9a7b-a05aa55ae671": Phase="Running", Reason="", readiness=true. Elapsed: 4.034120978s Mar 22 01:34:03.419: INFO: Pod "var-expansion-203fb18f-3aec-402c-9a7b-a05aa55ae671": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.039318023s STEP: Saw pod success Mar 22 01:34:03.419: INFO: Pod "var-expansion-203fb18f-3aec-402c-9a7b-a05aa55ae671" satisfied condition "Succeeded or Failed" Mar 22 01:34:03.422: INFO: Trying to get logs from node latest-worker pod var-expansion-203fb18f-3aec-402c-9a7b-a05aa55ae671 container dapi-container: STEP: delete the pod Mar 22 01:34:03.473: INFO: Waiting for pod var-expansion-203fb18f-3aec-402c-9a7b-a05aa55ae671 to disappear Mar 22 01:34:03.490: INFO: Pod var-expansion-203fb18f-3aec-402c-9a7b-a05aa55ae671 no longer exists [AfterEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 01:34:03.490: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-3058" for this suite. 
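------------------------------
The variable-expansion test above exercises the $(VAR) syntax in env values: the kubelet expands references to previously defined variables before the container starts, so a new variable can be composed from existing ones. A minimal client-go sketch follows; the variable names, composed value, and busybox image tag are illustrative assumptions, not values from this run.

package main

import (
    "context"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    cs := kubernetes.NewForConfigOrDie(cfg)

    pod := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "env-composition-demo"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Containers: []corev1.Container{{
                Name:    "dapi-container",
                Image:   "k8s.gcr.io/e2e-test-images/busybox:1.29", // illustrative tag
                Command: []string{"sh", "-c", "env"}, // print the env so the result is visible in logs
                Env: []corev1.EnvVar{
                    {Name: "FOO", Value: "foo-value"},
                    {Name: "BAR", Value: "bar-value"},
                    // $(FOO) and $(BAR) are expanded by the kubelet before
                    // the container starts, composing FOOBAR from the two
                    // variables defined above.
                    {Name: "FOOBAR", Value: "$(FOO);;$(BAR)"},
                },
            }},
        },
    }
    if _, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
        panic(err)
    }
}
------------------------------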
• [SLOW TEST:6.232 seconds] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should allow composing env vars into new env vars [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":330,"completed":233,"skipped":4191,"failed":16,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-apps] CronJob should support CronJob API operations [Conformance]"]} [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 01:34:03.529: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and ensure its status is promptly calculated. 
[Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 01:34:10.597: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-7177" for this suite. • [SLOW TEST:7.079 seconds] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]","total":330,"completed":234,"skipped":4191,"failed":16,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-apps] CronJob should support CronJob API operations [Conformance]"]} S ------------------------------ [sig-api-machinery] Discovery should validate PreferredVersion for each APIGroup [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-api-machinery] Discovery /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 01:34:10.607: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename discovery STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Discovery 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/discovery.go:39 STEP: Setting up server cert [It] should validate PreferredVersion for each APIGroup [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Mar 22 01:34:10.929: INFO: Checking APIGroup: apiregistration.k8s.io Mar 22 01:34:10.930: INFO: PreferredVersion.GroupVersion: apiregistration.k8s.io/v1 Mar 22 01:34:10.930: INFO: Versions found [{apiregistration.k8s.io/v1 v1} {apiregistration.k8s.io/v1beta1 v1beta1}] Mar 22 01:34:10.930: INFO: apiregistration.k8s.io/v1 matches apiregistration.k8s.io/v1 Mar 22 01:34:10.930: INFO: Checking APIGroup: apps Mar 22 01:34:10.931: INFO: PreferredVersion.GroupVersion: apps/v1 Mar 22 01:34:10.931: INFO: Versions found [{apps/v1 v1}] Mar 22 01:34:10.931: INFO: apps/v1 matches apps/v1 Mar 22 01:34:10.931: INFO: Checking APIGroup: events.k8s.io Mar 22 01:34:10.932: INFO: PreferredVersion.GroupVersion: events.k8s.io/v1 Mar 22 01:34:10.932: INFO: Versions found [{events.k8s.io/v1 v1} {events.k8s.io/v1beta1 v1beta1}] Mar 22 01:34:10.932: INFO: events.k8s.io/v1 matches events.k8s.io/v1 Mar 22 01:34:10.932: INFO: Checking APIGroup: authentication.k8s.io Mar 22 01:34:10.933: INFO: PreferredVersion.GroupVersion: authentication.k8s.io/v1 Mar 22 01:34:10.933: INFO: Versions found [{authentication.k8s.io/v1 v1} {authentication.k8s.io/v1beta1 v1beta1}] Mar 22 01:34:10.933: INFO: authentication.k8s.io/v1 matches authentication.k8s.io/v1 Mar 22 01:34:10.933: INFO: Checking APIGroup: authorization.k8s.io Mar 22 01:34:10.934: INFO: PreferredVersion.GroupVersion: authorization.k8s.io/v1 Mar 22 01:34:10.934: INFO: Versions found [{authorization.k8s.io/v1 v1} {authorization.k8s.io/v1beta1 v1beta1}] Mar 22 01:34:10.934: INFO: authorization.k8s.io/v1 matches authorization.k8s.io/v1 Mar 22 01:34:10.934: INFO: Checking APIGroup: autoscaling Mar 22 01:34:10.935: INFO: PreferredVersion.GroupVersion: autoscaling/v1 Mar 22 01:34:10.935: INFO: Versions found [{autoscaling/v1 v1} {autoscaling/v2beta1 v2beta1} {autoscaling/v2beta2 v2beta2}] Mar 22 01:34:10.935: INFO: autoscaling/v1 matches autoscaling/v1 Mar 22 01:34:10.935: INFO: Checking APIGroup: batch Mar 22 01:34:10.935: INFO: PreferredVersion.GroupVersion: batch/v1 Mar 22 01:34:10.935: INFO: Versions found [{batch/v1 v1} {batch/v1beta1 v1beta1}] Mar 22 01:34:10.935: INFO: batch/v1 matches batch/v1 Mar 22 01:34:10.935: INFO: Checking APIGroup: certificates.k8s.io Mar 22 01:34:10.936: INFO: PreferredVersion.GroupVersion: certificates.k8s.io/v1 Mar 22 01:34:10.936: INFO: Versions found [{certificates.k8s.io/v1 v1} {certificates.k8s.io/v1beta1 v1beta1}] Mar 22 01:34:10.936: INFO: certificates.k8s.io/v1 matches certificates.k8s.io/v1 Mar 22 01:34:10.936: INFO: Checking APIGroup: networking.k8s.io Mar 22 01:34:10.937: INFO: PreferredVersion.GroupVersion: networking.k8s.io/v1 Mar 22 01:34:10.937: INFO: Versions found [{networking.k8s.io/v1 v1} {networking.k8s.io/v1beta1 v1beta1}] Mar 22 01:34:10.937: INFO: networking.k8s.io/v1 matches networking.k8s.io/v1 Mar 22 01:34:10.937: INFO: Checking APIGroup: extensions Mar 22 01:34:10.938: INFO: PreferredVersion.GroupVersion: extensions/v1beta1 Mar 22 01:34:10.938: INFO: Versions found [{extensions/v1beta1 v1beta1}] Mar 22 01:34:10.938: INFO: extensions/v1beta1 matches extensions/v1beta1 Mar 22 01:34:10.938: INFO: Checking APIGroup: policy Mar 22 01:34:10.939: INFO: PreferredVersion.GroupVersion: policy/v1beta1 Mar 22 
01:34:10.939: INFO: Versions found [{policy/v1beta1 v1beta1}] Mar 22 01:34:10.939: INFO: policy/v1beta1 matches policy/v1beta1 Mar 22 01:34:10.939: INFO: Checking APIGroup: rbac.authorization.k8s.io Mar 22 01:34:10.940: INFO: PreferredVersion.GroupVersion: rbac.authorization.k8s.io/v1 Mar 22 01:34:10.940: INFO: Versions found [{rbac.authorization.k8s.io/v1 v1} {rbac.authorization.k8s.io/v1beta1 v1beta1}] Mar 22 01:34:10.940: INFO: rbac.authorization.k8s.io/v1 matches rbac.authorization.k8s.io/v1 Mar 22 01:34:10.940: INFO: Checking APIGroup: storage.k8s.io Mar 22 01:34:10.941: INFO: PreferredVersion.GroupVersion: storage.k8s.io/v1 Mar 22 01:34:10.941: INFO: Versions found [{storage.k8s.io/v1 v1} {storage.k8s.io/v1beta1 v1beta1}] Mar 22 01:34:10.941: INFO: storage.k8s.io/v1 matches storage.k8s.io/v1 Mar 22 01:34:10.941: INFO: Checking APIGroup: admissionregistration.k8s.io Mar 22 01:34:10.942: INFO: PreferredVersion.GroupVersion: admissionregistration.k8s.io/v1 Mar 22 01:34:10.942: INFO: Versions found [{admissionregistration.k8s.io/v1 v1} {admissionregistration.k8s.io/v1beta1 v1beta1}] Mar 22 01:34:10.942: INFO: admissionregistration.k8s.io/v1 matches admissionregistration.k8s.io/v1 Mar 22 01:34:10.942: INFO: Checking APIGroup: apiextensions.k8s.io Mar 22 01:34:10.942: INFO: PreferredVersion.GroupVersion: apiextensions.k8s.io/v1 Mar 22 01:34:10.942: INFO: Versions found [{apiextensions.k8s.io/v1 v1} {apiextensions.k8s.io/v1beta1 v1beta1}] Mar 22 01:34:10.942: INFO: apiextensions.k8s.io/v1 matches apiextensions.k8s.io/v1 Mar 22 01:34:10.942: INFO: Checking APIGroup: scheduling.k8s.io Mar 22 01:34:10.943: INFO: PreferredVersion.GroupVersion: scheduling.k8s.io/v1 Mar 22 01:34:10.943: INFO: Versions found [{scheduling.k8s.io/v1 v1} {scheduling.k8s.io/v1beta1 v1beta1}] Mar 22 01:34:10.943: INFO: scheduling.k8s.io/v1 matches scheduling.k8s.io/v1 Mar 22 01:34:10.943: INFO: Checking APIGroup: coordination.k8s.io Mar 22 01:34:10.944: INFO: PreferredVersion.GroupVersion: coordination.k8s.io/v1 Mar 22 01:34:10.944: INFO: Versions found [{coordination.k8s.io/v1 v1} {coordination.k8s.io/v1beta1 v1beta1}] Mar 22 01:34:10.944: INFO: coordination.k8s.io/v1 matches coordination.k8s.io/v1 Mar 22 01:34:10.944: INFO: Checking APIGroup: node.k8s.io Mar 22 01:34:10.945: INFO: PreferredVersion.GroupVersion: node.k8s.io/v1 Mar 22 01:34:10.945: INFO: Versions found [{node.k8s.io/v1 v1} {node.k8s.io/v1beta1 v1beta1}] Mar 22 01:34:10.945: INFO: node.k8s.io/v1 matches node.k8s.io/v1 Mar 22 01:34:10.945: INFO: Checking APIGroup: discovery.k8s.io Mar 22 01:34:10.946: INFO: PreferredVersion.GroupVersion: discovery.k8s.io/v1beta1 Mar 22 01:34:10.946: INFO: Versions found [{discovery.k8s.io/v1beta1 v1beta1}] Mar 22 01:34:10.946: INFO: discovery.k8s.io/v1beta1 matches discovery.k8s.io/v1beta1 Mar 22 01:34:10.946: INFO: Checking APIGroup: flowcontrol.apiserver.k8s.io Mar 22 01:34:10.947: INFO: PreferredVersion.GroupVersion: flowcontrol.apiserver.k8s.io/v1beta1 Mar 22 01:34:10.947: INFO: Versions found [{flowcontrol.apiserver.k8s.io/v1beta1 v1beta1}] Mar 22 01:34:10.947: INFO: flowcontrol.apiserver.k8s.io/v1beta1 matches flowcontrol.apiserver.k8s.io/v1beta1 Mar 22 01:34:10.947: INFO: Checking APIGroup: pingcap.com Mar 22 01:34:10.948: INFO: PreferredVersion.GroupVersion: pingcap.com/v1alpha1 Mar 22 01:34:10.948: INFO: Versions found [{pingcap.com/v1alpha1 v1alpha1}] Mar 22 01:34:10.948: INFO: pingcap.com/v1alpha1 matches pingcap.com/v1alpha1 [AfterEach] [sig-api-machinery] Discovery 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 01:34:10.948: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "discovery-7146" for this suite. •{"msg":"PASSED [sig-api-machinery] Discovery should validate PreferredVersion for each APIGroup [Conformance]","total":330,"completed":235,"skipped":4192,"failed":16,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-apps] CronJob should support CronJob API operations [Conformance]"]} ------------------------------ [sig-node] Variable Expansion should verify that a failing subpath expansion can be modified during the lifecycle of a container [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 01:34:10.956: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should verify that a failing subpath expansion can be modified during the lifecycle of a container [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating the pod with failed condition STEP: updating the pod Mar 22 01:36:11.652: INFO: Successfully updated pod "var-expansion-5afc0878-4241-4849-997b-64c00418c5df" STEP: waiting for pod running STEP: deleting the pod gracefully Mar 22 01:36:15.786: INFO: Deleting pod "var-expansion-5afc0878-4241-4849-997b-64c00418c5df" in namespace "var-expansion-1546" Mar 22 01:36:15.792: INFO: Wait up to 5m0s for pod "var-expansion-5afc0878-4241-4849-997b-64c00418c5df" to be fully deleted [AfterEach] 
[sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 01:37:35.856: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-1546" for this suite. • [SLOW TEST:204.909 seconds] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should verify that a failing subpath expansion can be modified during the lifecycle of a container [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Variable Expansion should verify that a failing subpath expansion can be modified during the lifecycle of a container [Slow] [Conformance]","total":330,"completed":236,"skipped":4192,"failed":16,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-apps] CronJob should support CronJob API operations [Conformance]"]} [sig-scheduling] SchedulerPreemption [Serial] validates basic preemption works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 01:37:35.865: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-preemption STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:90 Mar 22 01:37:36.008: INFO: Waiting up to 1m0s for all nodes to be ready Mar 22 01:38:36.033: INFO: Waiting 
for terminating namespaces to be deleted... [It] validates basic preemption works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Create pods that use 2/3 of node resources. Mar 22 01:38:36.078: INFO: Created pod: pod0-sched-preemption-low-priority Mar 22 01:38:36.190: INFO: Created pod: pod1-sched-preemption-medium-priority STEP: Wait for pods to be scheduled. STEP: Run a high priority pod that has same requirements as that of lower priority pod [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 01:39:40.329: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-3819" for this suite. [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:78 • [SLOW TEST:124.639 seconds] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates basic preemption works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] validates basic preemption works [Conformance]","total":330,"completed":237,"skipped":4192,"failed":16,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-apps] CronJob should support CronJob API operations [Conformance]"]} S ------------------------------ [sig-node] Pods should support remote command execution over websockets [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] 
[sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 01:39:40.504: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:187 [It] should support remote command execution over websockets [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Mar 22 01:39:40.589: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes Mar 22 01:39:40.617: INFO: The status of Pod pod-exec-websocket-8a60b102-63d2-4f05-8403-f5db7db4e1d2 is Pending, waiting for it to be Running (with Ready = true) Mar 22 01:39:42.622: INFO: The status of Pod pod-exec-websocket-8a60b102-63d2-4f05-8403-f5db7db4e1d2 is Pending, waiting for it to be Running (with Ready = true) Mar 22 01:39:44.621: INFO: The status of Pod pod-exec-websocket-8a60b102-63d2-4f05-8403-f5db7db4e1d2 is Running (Ready = true) [AfterEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 01:39:44.727: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-5643" for this suite. •{"msg":"PASSED [sig-node] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":330,"completed":238,"skipped":4193,"failed":16,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-apps] CronJob should support CronJob API operations [Conformance]"]} ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with 
mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 01:39:44.736: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name projected-configmap-test-volume-map-68542f26-8c4a-4f16-8d19-83bcb706d627 STEP: Creating a pod to test consume configMaps Mar 22 01:39:44.905: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-f0252e6a-7226-49d8-a35e-05d8153b72a3" in namespace "projected-5857" to be "Succeeded or Failed" Mar 22 01:39:44.908: INFO: Pod "pod-projected-configmaps-f0252e6a-7226-49d8-a35e-05d8153b72a3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.475748ms Mar 22 01:39:46.921: INFO: Pod "pod-projected-configmaps-f0252e6a-7226-49d8-a35e-05d8153b72a3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016053798s Mar 22 01:39:49.054: INFO: Pod "pod-projected-configmaps-f0252e6a-7226-49d8-a35e-05d8153b72a3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.148445657s Mar 22 01:39:51.058: INFO: Pod "pod-projected-configmaps-f0252e6a-7226-49d8-a35e-05d8153b72a3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.152438181s STEP: Saw pod success Mar 22 01:39:51.058: INFO: Pod "pod-projected-configmaps-f0252e6a-7226-49d8-a35e-05d8153b72a3" satisfied condition "Succeeded or Failed" Mar 22 01:39:51.060: INFO: Trying to get logs from node latest-worker2 pod pod-projected-configmaps-f0252e6a-7226-49d8-a35e-05d8153b72a3 container agnhost-container: STEP: delete the pod Mar 22 01:39:51.139: INFO: Waiting for pod pod-projected-configmaps-f0252e6a-7226-49d8-a35e-05d8153b72a3 to disappear Mar 22 01:39:51.157: INFO: Pod pod-projected-configmaps-f0252e6a-7226-49d8-a35e-05d8153b72a3 no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 01:39:51.157: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5857" for this suite. 
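Note: the "mappings and Item mode set" spec above consumes a ConfigMap through a projected volume that remaps a key to a new path and pins a per-item file mode. A hedged sketch of that volume shape (ConfigMap name, key, and path are illustrative, not the test's generated names):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// projectedConfigMapVolume sketches a projected ConfigMap source that
// remaps key "data-1" to a different path and pins an explicit 0400
// file mode on that item, so the mounted file shows as -r--------.
func projectedConfigMapVolume() corev1.Volume {
	mode := int32(0400)
	return corev1.Volume{
		Name: "projected-configmap-volume",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{{
					ConfigMap: &corev1.ConfigMapProjection{
						LocalObjectReference: corev1.LocalObjectReference{
							Name: "projected-configmap-test-volume-map",
						},
						Items: []corev1.KeyToPath{{
							Key:  "data-1",
							Path: "path/to/data-2",
							Mode: &mode,
						}},
					},
				}},
			},
		},
	}
}

func main() { fmt.Println(projectedConfigMapVolume().Name) }

The per-item Mode overrides the volume-wide defaultMode for just that file, which is the distinction this spec verifies.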
• [SLOW TEST:6.431 seconds] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":330,"completed":239,"skipped":4193,"failed":16,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-apps] CronJob should support CronJob API operations [Conformance]"]} SSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 01:39:51.167: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating secret with name secret-test-4beeae3d-5a26-45b1-aaf1-f79f258d2500 STEP: Creating a pod to test consume secrets Mar 22 01:39:51.297: INFO: Waiting up to 5m0s for pod "pod-secrets-838fe0fe-716f-455c-8196-9644d53a5bf5" in namespace 
"secrets-6842" to be "Succeeded or Failed" Mar 22 01:39:51.353: INFO: Pod "pod-secrets-838fe0fe-716f-455c-8196-9644d53a5bf5": Phase="Pending", Reason="", readiness=false. Elapsed: 55.79915ms Mar 22 01:39:53.407: INFO: Pod "pod-secrets-838fe0fe-716f-455c-8196-9644d53a5bf5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.10951621s Mar 22 01:39:55.416: INFO: Pod "pod-secrets-838fe0fe-716f-455c-8196-9644d53a5bf5": Phase="Running", Reason="", readiness=true. Elapsed: 4.118911983s Mar 22 01:39:57.421: INFO: Pod "pod-secrets-838fe0fe-716f-455c-8196-9644d53a5bf5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.124122966s STEP: Saw pod success Mar 22 01:39:57.421: INFO: Pod "pod-secrets-838fe0fe-716f-455c-8196-9644d53a5bf5" satisfied condition "Succeeded or Failed" Mar 22 01:39:57.425: INFO: Trying to get logs from node latest-worker pod pod-secrets-838fe0fe-716f-455c-8196-9644d53a5bf5 container secret-volume-test: STEP: delete the pod Mar 22 01:39:57.478: INFO: Waiting for pod pod-secrets-838fe0fe-716f-455c-8196-9644d53a5bf5 to disappear Mar 22 01:39:57.489: INFO: Pod pod-secrets-838fe0fe-716f-455c-8196-9644d53a5bf5 no longer exists [AfterEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 01:39:57.489: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-6842" for this suite. • [SLOW TEST:6.331 seconds] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":330,"completed":240,"skipped":4204,"failed":16,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work 
for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-apps] CronJob should support CronJob API operations [Conformance]"]} SSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 01:39:57.498: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group and version but different kinds [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: CRs in the same group and version but different kinds (two CRDs) show up in OpenAPI documentation Mar 22 01:39:57.569: INFO: >>> kubeConfig: /root/.kube/config Mar 22 01:40:01.193: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 01:40:15.066: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-8297" for this suite. 
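Note: the CRD publish-OpenAPI spec above checks that two CRDs sharing a group/version but differing in kind both surface in the aggregated OpenAPI document. A rough sketch of how a client might list the published definitions, assuming client-go's discovery client and an illustrative kubeconfig path (error handling kept minimal):

package main

import (
	"fmt"

	"k8s.io/client-go/discovery"
	"k8s.io/client-go/tools/clientcmd"
)

// Dumps definition names from the aggregated OpenAPI v2 document,
// where both published CRD kinds should appear.
func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	dc, err := discovery.NewDiscoveryClientForConfig(cfg)
	if err != nil {
		panic(err)
	}
	doc, err := dc.OpenAPISchema()
	if err != nil {
		panic(err)
	}
	for _, def := range doc.GetDefinitions().GetAdditionalProperties() {
		fmt.Println(def.GetName())
	}
}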
• [SLOW TEST:17.574 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group and version but different kinds [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":330,"completed":241,"skipped":4215,"failed":16,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-apps] CronJob should support CronJob API operations [Conformance]"]} SSSSS ------------------------------ [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 01:40:15.072: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with terminating scopes. 
[Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a ResourceQuota with terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a long running pod STEP: Ensuring resource quota with not terminating scope captures the pod usage STEP: Ensuring resource quota with terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a terminating pod STEP: Ensuring resource quota with terminating scope captures the pod usage STEP: Ensuring resource quota with not terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 01:40:31.407: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-8314" for this suite. • [SLOW TEST:16.346 seconds] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with terminating scopes. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance]","total":330,"completed":242,"skipped":4220,"failed":16,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-apps] CronJob should support CronJob API operations [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS 
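Note: the terminating-scope behaviour verified above hinges on ResourceQuota scopes: a quota with the Terminating scope only counts pods that set activeDeadlineSeconds, while a NotTerminating-scoped quota counts the rest. A minimal sketch of the terminating-scoped quota object (name and limit are illustrative):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// terminatingQuota sketches a ResourceQuota scoped to Terminating pods
// (those with activeDeadlineSeconds set); long-running pods without a
// deadline are ignored by it, which is the split the spec checks.
func terminatingQuota() *corev1.ResourceQuota {
	return &corev1.ResourceQuota{
		ObjectMeta: metav1.ObjectMeta{Name: "quota-terminating"},
		Spec: corev1.ResourceQuotaSpec{
			Hard: corev1.ResourceList{
				corev1.ResourcePods: resource.MustParse("1"),
			},
			Scopes: []corev1.ResourceQuotaScope{
				corev1.ResourceQuotaScopeTerminating,
			},
		},
	}
}

func main() { fmt.Println(terminatingQuota().Name) }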
------------------------------ [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 01:40:31.420: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not cause race condition when used for configmaps [Serial] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating 50 configmaps STEP: Creating RC which spawns configmap-volume pods Mar 22 01:40:32.602: INFO: Pod name wrapped-volume-race-f2187fbb-f30d-4667-b011-d20416816d27: Found 0 pods out of 5 Mar 22 01:40:37.612: INFO: Pod name wrapped-volume-race-f2187fbb-f30d-4667-b011-d20416816d27: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-f2187fbb-f30d-4667-b011-d20416816d27 in namespace emptydir-wrapper-6859, will wait for the garbage collector to delete the pods Mar 22 01:40:55.772: INFO: Deleting ReplicationController wrapped-volume-race-f2187fbb-f30d-4667-b011-d20416816d27 took: 7.114128ms Mar 22 01:40:56.372: INFO: Terminating ReplicationController wrapped-volume-race-f2187fbb-f30d-4667-b011-d20416816d27 pods took: 600.820514ms STEP: Creating RC which spawns configmap-volume pods Mar 22 01:41:35.634: INFO: Pod name wrapped-volume-race-c270f133-6687-4979-960a-e1ee26070ee4: Found 0 pods out of 5 Mar 22 01:41:40.643: INFO: Pod name wrapped-volume-race-c270f133-6687-4979-960a-e1ee26070ee4: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-c270f133-6687-4979-960a-e1ee26070ee4 in namespace emptydir-wrapper-6859, will wait for the garbage collector to delete the pods Mar 22 01:41:56.789: INFO: Deleting ReplicationController wrapped-volume-race-c270f133-6687-4979-960a-e1ee26070ee4 took: 6.909697ms Mar 22 01:41:57.389: INFO: Terminating ReplicationController wrapped-volume-race-c270f133-6687-4979-960a-e1ee26070ee4 pods took: 600.137517ms STEP: Creating RC which spawns configmap-volume pods Mar 22 01:42:35.826: INFO: Pod name wrapped-volume-race-5ab2b617-5485-40d2-99dc-6dad16fdc6d0: Found 0 pods out of 5 Mar 22 01:42:40.837: INFO: Pod name wrapped-volume-race-5ab2b617-5485-40d2-99dc-6dad16fdc6d0: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-5ab2b617-5485-40d2-99dc-6dad16fdc6d0 in namespace emptydir-wrapper-6859, will wait for the garbage collector to delete the pods Mar 22 01:42:56.922: INFO: Deleting ReplicationController wrapped-volume-race-5ab2b617-5485-40d2-99dc-6dad16fdc6d0 took: 7.435677ms Mar 22 01:42:57.523: INFO: Terminating ReplicationController wrapped-volume-race-5ab2b617-5485-40d2-99dc-6dad16fdc6d0 pods took: 600.40796ms STEP: Cleaning up the configMaps [AfterEach] [sig-storage] EmptyDir wrapper volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 01:43:56.370: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace 
"emptydir-wrapper-6859" for this suite. • [SLOW TEST:204.977 seconds] [sig-storage] EmptyDir wrapper volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should not cause race condition when used for configmaps [Serial] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","total":330,"completed":243,"skipped":4301,"failed":16,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-apps] CronJob should support CronJob API operations [Conformance]"]} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 01:43:56.398: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Mar 22 01:43:56.521: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Mar 22 01:43:56.547: INFO: Waiting for terminating namespaces to be deleted... 
Mar 22 01:43:56.550: INFO: Logging pods the apiserver thinks is on node latest-worker before test Mar 22 01:43:56.555: INFO: chaos-controller-manager-69c479c674-rdmrr from default started at 2021-03-22 00:02:52 +0000 UTC (1 container statuses recorded) Mar 22 01:43:56.556: INFO: Container chaos-mesh ready: true, restart count 0 Mar 22 01:43:56.556: INFO: chaos-daemon-vb9xf from default started at 2021-03-22 00:02:51 +0000 UTC (1 container statuses recorded) Mar 22 01:43:56.556: INFO: Container chaos-daemon ready: true, restart count 0 Mar 22 01:43:56.556: INFO: kindnet-l4mzm from kube-system started at 2021-03-22 00:02:51 +0000 UTC (1 container statuses recorded) Mar 22 01:43:56.556: INFO: Container kindnet-cni ready: true, restart count 0 Mar 22 01:43:56.556: INFO: kube-proxy-5wvjm from kube-system started at 2021-02-19 10:12:05 +0000 UTC (1 container statuses recorded) Mar 22 01:43:56.556: INFO: Container kube-proxy ready: true, restart count 0 Mar 22 01:43:56.556: INFO: Logging pods the apiserver thinks is on node latest-worker2 before test Mar 22 01:43:56.561: INFO: chaos-daemon-4zjcg from default started at 2021-03-22 00:02:52 +0000 UTC (1 container statuses recorded) Mar 22 01:43:56.561: INFO: Container chaos-daemon ready: true, restart count 0 Mar 22 01:43:56.561: INFO: kindnet-7qb7q from kube-system started at 2021-03-22 00:02:52 +0000 UTC (1 container statuses recorded) Mar 22 01:43:56.561: INFO: Container kindnet-cni ready: true, restart count 0 Mar 22 01:43:56.561: INFO: kube-proxy-7q92q from kube-system started at 2021-02-19 10:12:05 +0000 UTC (1 container statuses recorded) Mar 22 01:43:56.561: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-2523e499-0f04-44bf-bf51-01ce4ad56bba 95 STEP: Trying to create a pod(pod4) with hostport 54322 and hostIP 0.0.0.0(empty string here) and expect scheduled STEP: Trying to create another pod(pod5) with hostport 54322 but hostIP 172.18.0.13 on the node which pod4 resides and expect not scheduled STEP: removing the label kubernetes.io/e2e-2523e499-0f04-44bf-bf51-01ce4ad56bba off the node latest-worker2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-2523e499-0f04-44bf-bf51-01ce4ad56bba [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 01:49:06.898: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-9779" for this suite. 
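Note: the hostPort conflict above arises because a hostPort bound with an empty hostIP means 0.0.0.0, which overlaps any specific node IP on the same port and protocol; the scheduler must therefore refuse to co-locate the two pods. A hedged sketch of the two pod shapes involved (image tag and names are illustrative):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// hostPortPod builds a pod requesting hostPort 54322/TCP with the
// given hostIP; "" is treated as 0.0.0.0 and so conflicts with any
// concrete node IP using the same port and protocol.
func hostPortPod(name, hostIP string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: name},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "agnhost",
				Image: "k8s.gcr.io/e2e-test-images/agnhost:2.28",
				Ports: []corev1.ContainerPort{{
					ContainerPort: 8080,
					HostPort:      54322,
					HostIP:        hostIP,
					Protocol:      corev1.ProtocolTCP,
				}},
			}},
		},
	}
}

func main() {
	fmt.Println(hostPortPod("pod4", "").Name)             // binds 0.0.0.0:54322, schedules
	fmt.Println(hostPortPod("pod5", "172.18.0.13").Name)  // same node: must stay Pending
}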
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:310.515 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","total":330,"completed":244,"skipped":4319,"failed":16,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-apps] CronJob should support CronJob API operations [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-network] EndpointSlice /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 01:49:06.914: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename endpointslice STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] EndpointSlice /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/endpointslice.go:46 [It] should create Endpoints and EndpointSlices for Pods matching a Service [Conformance] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: referencing a single matching pod Mar 22 01:49:12.485: FAIL: Error fetching EndpointSlice for Service endpointslice-7089/example-int-port Unexpected error: <*errors.StatusError | 0xc001f06d20>: { ErrStatus: { TypeMeta: {Kind: "", APIVersion: ""}, ListMeta: { SelfLink: "", ResourceVersion: "", Continue: "", RemainingItemCount: nil, }, Status: "Failure", Message: "the server could not find the requested resource", Reason: "NotFound", Details: {Name: "", Group: "", Kind: "", UID: "", Causes: nil, RetryAfterSeconds: 0}, Code: 404, }, } the server could not find the requested resource occurred Full Stack Trace k8s.io/kubernetes/test/e2e/network.hasMatchingEndpointSlices(0x73e8b88, 0xc002a462c0, 0xc002132e28, 0x12, 0xc005a0a040, 0x10, 0x1, 0x1, 0x5, 0x10000c00862e408, ...) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/endpointslice.go:522 +0x2fc k8s.io/kubernetes/test/e2e/network.expectEndpointsAndSlices.func1(0xc00062f650, 0x7f0a9c1adaa8, 0xc000100c00) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/endpointslice.go:342 +0x7a k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtection(0xc00862e968, 0x2861100, 0x0, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:211 +0x69 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.pollImmediateInternal(0xc00062f650, 0xc00862e968, 0xc00062f650, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:445 +0x2f k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x12a05f200, 0x1bf08eb000, 0xc00862e968, 0x7f0a9c21d728, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:441 +0x4d k8s.io/kubernetes/test/e2e/network.expectEndpointsAndSlices(0x73e8b88, 0xc002a462c0, 0xc002132e28, 0x12, 0xc000c9ef00, 0xc00862f108, 0x1, 0x1, 0x1, 0x1, ...) 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/endpointslice.go:341 +0x153 k8s.io/kubernetes/test/e2e/network.glob..func6.4() /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/endpointslice.go:317 +0xec9 k8s.io/kubernetes/test/e2e.RunE2ETests(0xc002c6a180) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c k8s.io/kubernetes/test/e2e.TestE2E(0xc002c6a180) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b testing.tRunner(0xc002c6a180, 0x6d60740) /usr/local/go/src/testing/testing.go:1194 +0xef created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1239 +0x2b3 E0322 01:49:12.487087 7 runtime.go:78] Observed a panic: ginkgowrapper.FailurePanic{Message: [duplicate of the FAIL message above], Filename:"/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/endpointslice.go", Line:522, FullStackTrace: [duplicate of the Full Stack Trace above]} ( Your test failed. Ginkgo panics to prevent subsequent assertions from running. Normally Ginkgo rescues this panic so you shouldn't see it. But, if you make an assertion in a goroutine, Ginkgo can't capture the panic. To circumvent this, you should call defer GinkgoRecover() at the top of the goroutine that caused this panic. ) goroutine 140 [running]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.logPanic(0x6714bc0, 0xc000d936c0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:74 +0x95 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:48 +0x86 panic(0x6714bc0, 0xc000d936c0) /usr/local/go/src/runtime/panic.go:965 +0x1b9 k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail.func1(0xc008f76300, 0x2e7, 0x82e5845, 0x6e, 0x20a, 0xc0011ba900, 0x890) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/ginkgowrapper/wrapper.go:63 +0xa5 panic(0x5ea69e0, 0x72180e0) /usr/local/go/src/runtime/panic.go:965 +0x1b9 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo.Fail(0xc008f76300, 0x2e7, 0xc00862d9d8, 0x1, 0x1) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:267 +0xc8 k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail(0xc008f76300, 0x2e7, 0xc00862dac0, 0x1, 0x1) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/ginkgowrapper/wrapper.go:67 +0x1b5 k8s.io/kubernetes/test/e2e/framework.Fail(0xc008f76000, 0x2d2, 0xc005a0a3b0, 0x1, 0x1) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/log.go:62 +0x1ea k8s.io/kubernetes/vendor/github.com/onsi/gomega/internal/assertion.(*Assertion).match(0xc00862dc58, 0x7345b18, 0x99518a8, 0x0, 0xc00862de28, 0x3, 0x3, 0xc001f06d20) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/gomega/internal/assertion/assertion.go:75 +0x1f3 k8s.io/kubernetes/vendor/github.com/onsi/gomega/internal/assertion.(*Assertion).NotTo(0xc00862dc58, 0x7345b18, 0x99518a8, 0xc00862de28, 0x3, 0x3, 0xc000100c00) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/gomega/internal/assertion/assertion.go:48 +0xc7 k8s.io/kubernetes/test/e2e/framework.ExpectNoErrorWithOffset(0x1, 0x72e4880, 0xc001f06d20, 0xc00862de28, 0x3, 0x3) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/expect.go:46 +0xe7 k8s.io/kubernetes/test/e2e/framework.ExpectNoError(...) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/expect.go:40 k8s.io/kubernetes/test/e2e/network.hasMatchingEndpointSlices(0x73e8b88, 0xc002a462c0, 0xc002132e28, 0x12, 0xc005a0a040, 0x10, 0x1, 0x1, 0x5, 0x10000c00862e408, ...)
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/endpointslice.go:522 +0x2fc k8s.io/kubernetes/test/e2e/network.expectEndpointsAndSlices.func1(0xc00062f650, 0x7f0a9c1adaa8, 0xc000100c00) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/endpointslice.go:342 +0x7a k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtection(0xc00862e968, 0x2861100, 0x0, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:211 +0x69 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.pollImmediateInternal(0xc00062f650, 0xc00862e968, 0xc00062f650, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:445 +0x2f k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x12a05f200, 0x1bf08eb000, 0xc00862e968, 0x7f0a9c21d728, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:441 +0x4d k8s.io/kubernetes/test/e2e/network.expectEndpointsAndSlices(0x73e8b88, 0xc002a462c0, 0xc002132e28, 0x12, 0xc000c9ef00, 0xc00862f108, 0x1, 0x1, 0x1, 0x1, ...) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/endpointslice.go:341 +0x153 k8s.io/kubernetes/test/e2e/network.glob..func6.4() /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/endpointslice.go:317 +0xec9 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*runner).runSync(0xc001346300, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113 +0xa3 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*runner).run(0xc001346300, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:64 +0x15c k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*ItNode).Run(0xc001306ac0, 0x72e1260, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...) 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/it_node.go:26 +0x87 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec.(*Spec).runSample(0xc002e830e0, 0x0, 0x72e1260, 0xc0000ba840) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec/spec.go:215 +0x72f k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec.(*Spec).Run(0xc002e830e0, 0x72e1260, 0xc0000ba840) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec/spec.go:138 +0xf2 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).runSpec(0xc003504140, 0xc002e830e0, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:200 +0x111 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).runSpecs(0xc003504140, 0x1) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:170 +0x147 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).Run(0xc003504140, 0xc0035000a8) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:66 +0x117 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/suite.(*Suite).Run(0xc0000c6280, 0x7f0a9c18a880, 0xc002c6a180, 0x6b8fab1, 0x14, 0xc001bdb470, 0x3, 0x3, 0x7391178, 0xc0000ba840, ...) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/suite/suite.go:62 +0x426 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo.RunSpecsWithCustomReporters(0x72e60e0, 0xc002c6a180, 0x6b8fab1, 0x14, 0xc001544b00, 0x3, 0x4, 0x4) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:226 +0x218 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo.RunSpecsWithDefaultAndCustomReporters(0x72e60e0, 0xc002c6a180, 0x6b8fab1, 0x14, 0xc00094a6e0, 0x2, 0x2, 0x25) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:214 +0xad k8s.io/kubernetes/test/e2e.RunE2ETests(0xc002c6a180) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c k8s.io/kubernetes/test/e2e.TestE2E(0xc002c6a180) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b testing.tRunner(0xc002c6a180, 0x6d60740) /usr/local/go/src/testing/testing.go:1194 +0xef created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1239 +0x2b3 [AfterEach] [sig-network] EndpointSlice /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 STEP: Collecting events from namespace "endpointslice-7089". STEP: Found 8 events. 
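The failure above originates in hasMatchingEndpointSlices, which lists EndpointSlices for the Service inside a wait.PollImmediate loop and asserts ExpectNoError on the list call, so the 404 ("the server could not find the requested resource") fails the spec on the first iteration instead of being retried; a NotFound on the collection itself, rather than an empty list, may indicate version skew between the test binary and the API server over which discovery.k8s.io API version is served. A minimal sketch of that polling shape, assuming a client-go clientset and using the v1beta1 discovery client for illustration (hasEndpointSlices is an illustrative name; the real helper also matches ports and endpoint counts, and the sketch returns the error instead of asserting, which likewise stops the poll):

package e2esketch

import (
	"context"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// hasEndpointSlices polls until at least one EndpointSlice labeled for the
// Service exists. Returning a non-nil error from the condition aborts the
// poll immediately, which is why the 404 above surfaced right away rather
// than as a timeout.
func hasEndpointSlices(cs kubernetes.Interface, ns, service string) error {
	return wait.PollImmediate(5*time.Second, 2*time.Minute, func() (bool, error) {
		slices, err := cs.DiscoveryV1beta1().EndpointSlices(ns).List(context.TODO(),
			metav1.ListOptions{LabelSelector: "kubernetes.io/service-name=" + service})
		if err != nil {
			return false, err // a NotFound here surfaces exactly as in the log
		}
		return len(slices.Items) > 0, nil
	})
}

Code that needs to tolerate a briefly missing resource would typically check apierrors.IsNotFound(err) and keep polling instead of aborting.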
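The parenthesized Ginkgo note in the panic record above is generic advice that applies whenever assertions run on a spawned goroutine: framework.Fail signals failure by panicking, and Ginkgo can only rescue that panic on goroutines that defer GinkgoRecover(). A minimal sketch of the pattern it recommends (doWork is a hypothetical stand-in for real test work):

package e2esketch

import (
	. "github.com/onsi/ginkgo"
	. "github.com/onsi/gomega"
)

// doWork is a hypothetical helper standing in for whatever the goroutine does.
func doWork() error { return nil }

var _ = It("asserts from a goroutine", func() {
	done := make(chan struct{})
	go func() {
		// Without this deferred call, a failing Expect in this goroutine
		// panics past Ginkgo and produces a crash like the dump above.
		defer GinkgoRecover()
		defer close(done)
		Expect(doWork()).To(Succeed())
	}()
	<-done
})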
Mar 22 01:49:12.501: INFO: At 2021-03-22 01:49:07 +0000 UTC - event for pod1: {default-scheduler } Scheduled: Successfully assigned endpointslice-7089/pod1 to latest-worker Mar 22 01:49:12.501: INFO: At 2021-03-22 01:49:07 +0000 UTC - event for pod2: {default-scheduler } Scheduled: Successfully assigned endpointslice-7089/pod2 to latest-worker Mar 22 01:49:12.501: INFO: At 2021-03-22 01:49:08 +0000 UTC - event for pod1: {kubelet latest-worker} Pulled: Container image "k8s.gcr.io/e2e-test-images/nginx:1.14-1" already present on machine Mar 22 01:49:12.501: INFO: At 2021-03-22 01:49:08 +0000 UTC - event for pod2: {kubelet latest-worker} Pulled: Container image "k8s.gcr.io/e2e-test-images/nginx:1.14-1" already present on machine Mar 22 01:49:12.501: INFO: At 2021-03-22 01:49:10 +0000 UTC - event for pod1: {kubelet latest-worker} Created: Created container container1 Mar 22 01:49:12.501: INFO: At 2021-03-22 01:49:10 +0000 UTC - event for pod1: {kubelet latest-worker} Started: Started container container1 Mar 22 01:49:12.501: INFO: At 2021-03-22 01:49:10 +0000 UTC - event for pod2: {kubelet latest-worker} Created: Created container container1 Mar 22 01:49:12.501: INFO: At 2021-03-22 01:49:10 +0000 UTC - event for pod2: {kubelet latest-worker} Started: Started container container1 Mar 22 01:49:12.512: INFO: POD NODE PHASE GRACE CONDITIONS Mar 22 01:49:12.513: INFO: pod1 latest-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-03-22 01:49:07 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-03-22 01:49:11 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-03-22 01:49:11 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-03-22 01:49:06 +0000 UTC }] Mar 22 01:49:12.513: INFO: pod2 latest-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-03-22 01:49:07 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-03-22 01:49:11 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-03-22 01:49:11 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-03-22 01:49:07 +0000 UTC }] Mar 22 01:49:12.513: INFO: Mar 22 01:49:12.621: INFO: Logging node info for node latest-control-plane Mar 22 01:49:12.662: INFO: Node Info: &Node{ObjectMeta:{latest-control-plane 490b9532-4cb6-4803-8805-500c50bef538 7016197 0 2021-02-19 10:11:38 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-control-plane kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master:] map[kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2021-02-19 10:11:38 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {kubeadm Update v1 2021-02-19 10:11:39 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{}}}}} {kube-controller-manager Update v1 2021-02-19 10:12:16 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:kind://docker/latest/latest-control-plane,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-03-22 01:44:50 +0000 UTC,LastTransitionTime:2021-02-19 10:11:38 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-03-22 01:44:50 +0000 UTC,LastTransitionTime:2021-02-19 10:11:38 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-03-22 01:44:50 +0000 UTC,LastTransitionTime:2021-02-19 10:11:38 +0000 
UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-03-22 01:44:50 +0000 UTC,LastTransitionTime:2021-02-19 10:12:15 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.14,},NodeAddress{Type:Hostname,Address:latest-control-plane,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:2277e49732264d9b915753a27b5b08cc,SystemUUID:3fcd47a6-9190-448f-a26a-9823c0424f23,BootID:b267d78b-f69b-4338-80e8-3f4944338e5d,KernelVersion:4.15.0-118-generic,OSImage:Ubuntu 20.04 LTS,ContainerRuntimeVersion:containerd://1.3.3-14-g449e9269,KubeletVersion:v1.21.0-alpha.0,KubeProxyVersion:v1.21.0-alpha.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/etcd:3.4.13-0],SizeBytes:254659261,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.21.0-alpha.0],SizeBytes:136866158,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.21.0-alpha.0],SizeBytes:95511852,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.21.0-alpha.0],SizeBytes:88147262,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.21.0-alpha.0],SizeBytes:66088748,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:a3f7549ea04c419276e8b84e90a515bbce5bc8a057be2ed974ec45492eca346e k8s.gcr.io/e2e-test-images/agnhost:2.26],SizeBytes:49216572,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:4f3eec1e602e1e0b0e5858c8f3d399d0ef536799ec1f7666e025f211001cb706 k8s.gcr.io/e2e-test-images/agnhost:2.28],SizeBytes:49210832,},ContainerImage{Names:[k8s.gcr.io/coredns:1.7.0],SizeBytes:45355487,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 docker.io/aquasec/kube-bench:0.3.1],SizeBytes:8042926,},ContainerImage{Names:[k8s.gcr.io/pause:3.2],SizeBytes:685724,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:301268,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 22 01:49:12.663: INFO: Logging kubelet events for node latest-control-plane Mar 22 01:49:12.693: INFO: Logging pods the kubelet thinks is on node latest-control-plane Mar 22 01:49:12.794: INFO: coredns-74ff55c5b-9rxsk started at 2021-03-22 00:01:47 +0000 UTC (0+1 container statuses recorded) Mar 22 01:49:12.794: INFO: Container coredns ready: true, restart count 0 Mar 22 01:49:12.794: INFO: local-path-provisioner-8b46957d4-54gls started at 2021-02-19 10:12:21 +0000 UTC (0+1 container statuses recorded) Mar 22 01:49:12.794: INFO: Container local-path-provisioner ready: true, restart count 0 Mar 22 01:49:12.794: INFO: kube-controller-manager-latest-control-plane started at 2021-02-19 10:11:46 +0000 UTC (0+1 container statuses recorded) Mar 22 01:49:12.794: INFO: Container kube-controller-manager ready: true, restart count 0 Mar 22 01:49:12.794: INFO: kube-scheduler-latest-control-plane started at 2021-02-19 10:11:46 +0000 UTC (0+1 container statuses recorded) Mar 22 01:49:12.794: INFO: Container kube-scheduler ready: true, restart count 0 Mar 22 
01:49:12.794: INFO: kube-apiserver-latest-control-plane started at 2021-02-19 10:11:46 +0000 UTC (0+1 container statuses recorded) Mar 22 01:49:12.794: INFO: Container kube-apiserver ready: true, restart count 0 Mar 22 01:49:12.794: INFO: kindnet-94zqp started at 2021-02-19 10:11:54 +0000 UTC (0+1 container statuses recorded) Mar 22 01:49:12.794: INFO: Container kindnet-cni ready: true, restart count 0 Mar 22 01:49:12.794: INFO: etcd-latest-control-plane started at 2021-02-19 10:11:46 +0000 UTC (0+1 container statuses recorded) Mar 22 01:49:12.794: INFO: Container etcd ready: true, restart count 0 Mar 22 01:49:12.794: INFO: kube-proxy-6jdsd started at 2021-02-19 10:11:54 +0000 UTC (0+1 container statuses recorded) Mar 22 01:49:12.794: INFO: Container kube-proxy ready: true, restart count 0 Mar 22 01:49:12.794: INFO: coredns-74ff55c5b-tqd5x started at 2021-03-22 00:01:48 +0000 UTC (0+1 container statuses recorded) Mar 22 01:49:12.794: INFO: Container coredns ready: true, restart count 0 W0322 01:49:12.809936 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Mar 22 01:49:13.285: INFO: Latency metrics for node latest-control-plane Mar 22 01:49:13.285: INFO: Logging node info for node latest-worker Mar 22 01:49:13.288: INFO: Node Info: &Node{ObjectMeta:{latest-worker 52cd6d4b-d53f-435d-801a-04c2822dec44 7016638 0 2021-02-19 10:12:05 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux io.kubernetes.storage.mock/node:some-mock-node kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-worker kubernetes.io/os:linux] map[csi.volume.kubernetes.io/nodeid:{"csi-mock-csi-mock-volumes-11":"csi-mock-csi-mock-volumes-11","csi-mock-csi-mock-volumes-1170":"csi-mock-csi-mock-volumes-1170","csi-mock-csi-mock-volumes-1204":"csi-mock-csi-mock-volumes-1204","csi-mock-csi-mock-volumes-1243":"csi-mock-csi-mock-volumes-1243","csi-mock-csi-mock-volumes-135":"csi-mock-csi-mock-volumes-135","csi-mock-csi-mock-volumes-1517":"csi-mock-csi-mock-volumes-1517","csi-mock-csi-mock-volumes-1561":"csi-mock-csi-mock-volumes-1561","csi-mock-csi-mock-volumes-1595":"csi-mock-csi-mock-volumes-1595","csi-mock-csi-mock-volumes-1714":"csi-mock-csi-mock-volumes-1714","csi-mock-csi-mock-volumes-1732":"csi-mock-csi-mock-volumes-1732","csi-mock-csi-mock-volumes-1876":"csi-mock-csi-mock-volumes-1876","csi-mock-csi-mock-volumes-1945":"csi-mock-csi-mock-volumes-1945","csi-mock-csi-mock-volumes-2041":"csi-mock-csi-mock-volumes-2041","csi-mock-csi-mock-volumes-2057":"csi-mock-csi-mock-volumes-2057","csi-mock-csi-mock-volumes-2208":"csi-mock-csi-mock-volumes-2208","csi-mock-csi-mock-volumes-2302":"csi-mock-csi-mock-volumes-2302","csi-mock-csi-mock-volumes-2459":"csi-mock-csi-mock-volumes-2459","csi-mock-csi-mock-volumes-2511":"csi-mock-csi-mock-volumes-2511","csi-mock-csi-mock-volumes-2706":"csi-mock-csi-mock-volumes-2706","csi-mock-csi-mock-volumes-2710":"csi-mock-csi-mock-volumes-2710","csi-mock-csi-mock-volumes-273":"csi-mock-csi-mock-volumes-273","csi-mock-csi-mock-volumes-2782":"csi-mock-csi-mock-volumes-2782","csi-mock-csi-mock-volumes-2797":"csi-mock-csi-mock-volumes-2797","csi-mock-csi-mock-volumes-2805":"csi-mock-csi-mock-volumes-2805","csi-mock-csi-mock-volumes-2843":"csi-mock-csi-mock-volumes-2843","csi-mock-csi-mock-volumes-3075":"csi-mock-csi-mock-volumes-3075","csi-mock-csi-mock-volumes-3107":"csi-mock-csi-mock-volumes-3107","csi-mock-csi-mock-volumes-3115":"csi-mock-csi-mock-volumes-3115","csi-mock-csi-mock-volumes-3164":"csi-mock-csi-
mock-volumes-3164","csi-mock-csi-mock-volumes-3202":"csi-mock-csi-mock-volumes-3202","csi-mock-csi-mock-volumes-3235":"csi-mock-csi-mock-volumes-3235","csi-mock-csi-mock-volumes-3298":"csi-mock-csi-mock-volumes-3298","csi-mock-csi-mock-volumes-3313":"csi-mock-csi-mock-volumes-3313","csi-mock-csi-mock-volumes-3364":"csi-mock-csi-mock-volumes-3364","csi-mock-csi-mock-volumes-3488":"csi-mock-csi-mock-volumes-3488","csi-mock-csi-mock-volumes-3490":"csi-mock-csi-mock-volumes-3490","csi-mock-csi-mock-volumes-3546":"csi-mock-csi-mock-volumes-3546","csi-mock-csi-mock-volumes-3568":"csi-mock-csi-mock-volumes-3568","csi-mock-csi-mock-volumes-3595":"csi-mock-csi-mock-volumes-3595","csi-mock-csi-mock-volumes-3615":"csi-mock-csi-mock-volumes-3615","csi-mock-csi-mock-volumes-3622":"csi-mock-csi-mock-volumes-3622","csi-mock-csi-mock-volumes-3660":"csi-mock-csi-mock-volumes-3660","csi-mock-csi-mock-volumes-3738":"csi-mock-csi-mock-volumes-3738","csi-mock-csi-mock-volumes-380":"csi-mock-csi-mock-volumes-380","csi-mock-csi-mock-volumes-3905":"csi-mock-csi-mock-volumes-3905","csi-mock-csi-mock-volumes-3983":"csi-mock-csi-mock-volumes-3983","csi-mock-csi-mock-volumes-4658":"csi-mock-csi-mock-volumes-4658","csi-mock-csi-mock-volumes-4689":"csi-mock-csi-mock-volumes-4689","csi-mock-csi-mock-volumes-4839":"csi-mock-csi-mock-volumes-4839","csi-mock-csi-mock-volumes-4871":"csi-mock-csi-mock-volumes-4871","csi-mock-csi-mock-volumes-4885":"csi-mock-csi-mock-volumes-4885","csi-mock-csi-mock-volumes-4888":"csi-mock-csi-mock-volumes-4888","csi-mock-csi-mock-volumes-5028":"csi-mock-csi-mock-volumes-5028","csi-mock-csi-mock-volumes-5118":"csi-mock-csi-mock-volumes-5118","csi-mock-csi-mock-volumes-5120":"csi-mock-csi-mock-volumes-5120","csi-mock-csi-mock-volumes-5160":"csi-mock-csi-mock-volumes-5160","csi-mock-csi-mock-volumes-5164":"csi-mock-csi-mock-volumes-5164","csi-mock-csi-mock-volumes-5225":"csi-mock-csi-mock-volumes-5225","csi-mock-csi-mock-volumes-526":"csi-mock-csi-mock-volumes-526","csi-mock-csi-mock-volumes-5365":"csi-mock-csi-mock-volumes-5365","csi-mock-csi-mock-volumes-5399":"csi-mock-csi-mock-volumes-5399","csi-mock-csi-mock-volumes-5443":"csi-mock-csi-mock-volumes-5443","csi-mock-csi-mock-volumes-5520":"csi-mock-csi-mock-volumes-5520","csi-mock-csi-mock-volumes-5561":"csi-mock-csi-mock-volumes-5561","csi-mock-csi-mock-volumes-5608":"csi-mock-csi-mock-volumes-5608","csi-mock-csi-mock-volumes-5652":"csi-mock-csi-mock-volumes-5652","csi-mock-csi-mock-volumes-5672":"csi-mock-csi-mock-volumes-5672","csi-mock-csi-mock-volumes-569":"csi-mock-csi-mock-volumes-569","csi-mock-csi-mock-volumes-5759":"csi-mock-csi-mock-volumes-5759","csi-mock-csi-mock-volumes-5910":"csi-mock-csi-mock-volumes-5910","csi-mock-csi-mock-volumes-6046":"csi-mock-csi-mock-volumes-6046","csi-mock-csi-mock-volumes-6099":"csi-mock-csi-mock-volumes-6099","csi-mock-csi-mock-volumes-621":"csi-mock-csi-mock-volumes-621","csi-mock-csi-mock-volumes-6347":"csi-mock-csi-mock-volumes-6347","csi-mock-csi-mock-volumes-6447":"csi-mock-csi-mock-volumes-6447","csi-mock-csi-mock-volumes-6752":"csi-mock-csi-mock-volumes-6752","csi-mock-csi-mock-volumes-6763":"csi-mock-csi-mock-volumes-6763","csi-mock-csi-mock-volumes-7184":"csi-mock-csi-mock-volumes-7184","csi-mock-csi-mock-volumes-7244":"csi-mock-csi-mock-volumes-7244","csi-mock-csi-mock-volumes-7259":"csi-mock-csi-mock-volumes-7259","csi-mock-csi-mock-volumes-726":"csi-mock-csi-mock-volumes-726","csi-mock-csi-mock-volumes-7302":"csi-mock-csi-mock-volumes-7302","csi-mock-csi-mock-volumes-7346":"csi-mock-csi-m
ock-volumes-7346","csi-mock-csi-mock-volumes-7378":"csi-mock-csi-mock-volumes-7378","csi-mock-csi-mock-volumes-7385":"csi-mock-csi-mock-volumes-7385","csi-mock-csi-mock-volumes-746":"csi-mock-csi-mock-volumes-746","csi-mock-csi-mock-volumes-7574":"csi-mock-csi-mock-volumes-7574","csi-mock-csi-mock-volumes-7712":"csi-mock-csi-mock-volumes-7712","csi-mock-csi-mock-volumes-7749":"csi-mock-csi-mock-volumes-7749","csi-mock-csi-mock-volumes-7820":"csi-mock-csi-mock-volumes-7820","csi-mock-csi-mock-volumes-7946":"csi-mock-csi-mock-volumes-7946","csi-mock-csi-mock-volumes-8159":"csi-mock-csi-mock-volumes-8159","csi-mock-csi-mock-volumes-8382":"csi-mock-csi-mock-volumes-8382","csi-mock-csi-mock-volumes-8458":"csi-mock-csi-mock-volumes-8458","csi-mock-csi-mock-volumes-8569":"csi-mock-csi-mock-volumes-8569","csi-mock-csi-mock-volumes-8626":"csi-mock-csi-mock-volumes-8626","csi-mock-csi-mock-volumes-8627":"csi-mock-csi-mock-volumes-8627","csi-mock-csi-mock-volumes-8774":"csi-mock-csi-mock-volumes-8774","csi-mock-csi-mock-volumes-8777":"csi-mock-csi-mock-volumes-8777","csi-mock-csi-mock-volumes-8880":"csi-mock-csi-mock-volumes-8880","csi-mock-csi-mock-volumes-923":"csi-mock-csi-mock-volumes-923","csi-mock-csi-mock-volumes-9279":"csi-mock-csi-mock-volumes-9279","csi-mock-csi-mock-volumes-9372":"csi-mock-csi-mock-volumes-9372","csi-mock-csi-mock-volumes-9687":"csi-mock-csi-mock-volumes-9687","csi-mock-csi-mock-volumes-969":"csi-mock-csi-mock-volumes-969","csi-mock-csi-mock-volumes-983":"csi-mock-csi-mock-volumes-983","csi-mock-csi-mock-volumes-9858":"csi-mock-csi-mock-volumes-9858","csi-mock-csi-mock-volumes-9983":"csi-mock-csi-mock-volumes-9983","csi-mock-csi-mock-volumes-9992":"csi-mock-csi-mock-volumes-9992","csi-mock-csi-mock-volumes-9995":"csi-mock-csi-mock-volumes-9995"} kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-02-19 10:12:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {kube-controller-manager Update v1 2021-03-22 00:39:31 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}}}}} {e2e.test Update v1 2021-03-22 01:38:36 +0000 UTC FieldsV1 {"f:status":{"f:capacity":{"f:scheduling.k8s.io/foo":{}}}}} {kubelet Update v1 2021-03-22 01:38:38 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:csi.volume.kubernetes.io/nodeid":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:io.kubernetes.storage.mock/node":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{},"f:scheduling.k8s.io/foo":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:kind://docker/latest/latest-worker,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},scheduling.k8s.io/foo: {{3 0} {} 3 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},scheduling.k8s.io/foo: {{3 0} {} 3 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-03-22 01:48:40 +0000 UTC,LastTransitionTime:2021-02-19 10:12:05 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-03-22 01:48:40 +0000 UTC,LastTransitionTime:2021-02-19 10:12:05 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-03-22 01:48:40 +0000 UTC,LastTransitionTime:2021-02-19 10:12:05 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-03-22 01:48:40 +0000 UTC,LastTransitionTime:2021-02-19 10:12:25 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready 
status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.9,},NodeAddress{Type:Hostname,Address:latest-worker,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:9501a3ffa7bf40adaca8f4cb3d1dcc93,SystemUUID:6921bb21-67c9-42ea-b514-81b1daac5968,BootID:b267d78b-f69b-4338-80e8-3f4944338e5d,KernelVersion:4.15.0-118-generic,OSImage:Ubuntu 20.04 LTS,ContainerRuntimeVersion:containerd://1.3.3-14-g449e9269,KubeletVersion:v1.21.0-alpha.0,KubeProxyVersion:v1.21.0-alpha.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/ollivier/clearwater-cassandra@sha256:7460a5e29c91e16bd36534b5284b27832e1c6e0f7c22a4bb79eda79942d250e1 docker.io/ollivier/clearwater-cassandra:hunter],SizeBytes:386500834,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead-prov@sha256:ef3803b4336ea217f9a467dac7f5c4bc3d2cd0f3cbbcc6c101419506fc3a6fa4 docker.io/ollivier/clearwater-homestead-prov:hunter],SizeBytes:360721934,},ContainerImage{Names:[docker.io/ollivier/clearwater-ellis@sha256:abcaf4a9ec74aa746f3b3949f05ee359349fce2b0b3b2deedf67164719cff5dc docker.io/ollivier/clearwater-ellis:hunter],SizeBytes:351519591,},ContainerImage{Names:[docker.io/ollivier/clearwater-homer@sha256:f8738d756137990120f6aafca7e0bc57186ab591a4b51508ba9ddf30688f4de1 docker.io/ollivier/clearwater-homer:hunter],SizeBytes:344304298,},ContainerImage{Names:[docker.io/ollivier/clearwater-astaire@sha256:5bc730435563dc3e31f3a3b7bb58b5899c453929b624052622b3a9a1c2fd04d8 docker.io/ollivier/clearwater-astaire:hunter],SizeBytes:327310970,},ContainerImage{Names:[docker.io/ollivier/clearwater-bono@sha256:0da41ddbf72c158f8e617015450587634cdf6856696005601b2108df92a27254 docker.io/ollivier/clearwater-bono:hunter],SizeBytes:303708624,},ContainerImage{Names:[docker.io/ollivier/clearwater-sprout@sha256:fd91f90bdfbfac30288ca296962e36537f8fa311d34711904e027098227d9f49 docker.io/ollivier/clearwater-sprout:hunter],SizeBytes:298627136,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead@sha256:8467d6c9caefa6a402eba927ee31f8179b2dc3386ac4768dd04f4bd630a1b9e9 docker.io/ollivier/clearwater-homestead:hunter],SizeBytes:295167572,},ContainerImage{Names:[docker.io/ollivier/clearwater-ralf@sha256:7054a741dbd4ea45fa4efd8138c07e21d35fbed627971559de5df2d03883e94f docker.io/ollivier/clearwater-ralf:hunter],SizeBytes:287441316,},ContainerImage{Names:[docker.io/ollivier/clearwater-chronos@sha256:1d5f9b5385db632e3076463fec5a4db5d4a15ef21ac5a112cb42fb0551ffa36d docker.io/ollivier/clearwater-chronos:hunter],SizeBytes:285504787,},ContainerImage{Names:[k8s.gcr.io/etcd:3.4.13-0],SizeBytes:254659261,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.21.0-alpha.0],SizeBytes:136866158,},ContainerImage{Names:[docker.io/aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 docker.io/aquasec/kube-hunter:0.3.1],SizeBytes:124684106,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:112029652,},ContainerImage{Names:[docker.io/gluster/glusterdynamic-provisioner@sha256:90067cb05a7d217651e84576935934fd9dcff8e6bcdcbaa416bbf36fcd09dbd1 
docker.io/gluster/glusterdynamic-provisioner:v1.0],SizeBytes:111200078,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner@sha256:8bc20b52ce066dd4ea3d9eaac40c04ea8a77f47c33789676580cf4c7c9ea3c3d k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner:v1.0],SizeBytes:111199402,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.21.0-alpha.0],SizeBytes:95511852,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.21.0-alpha.0],SizeBytes:88147262,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:85425365,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.21.0-alpha.0],SizeBytes:66088748,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:49230179,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:a3f7549ea04c419276e8b84e90a515bbce5bc8a057be2ed974ec45492eca346e k8s.gcr.io/e2e-test-images/agnhost:2.26],SizeBytes:49216572,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:4f3eec1e602e1e0b0e5858c8f3d399d0ef536799ec1f7666e025f211001cb706 k8s.gcr.io/e2e-test-images/agnhost:2.28],SizeBytes:49210832,},ContainerImage{Names:[docker.io/bitnami/kubectl@sha256:471a2f06834db208d0e4bef5856550f911357c41b72d0baefa591b3714839067 docker.io/bitnami/kubectl:latest],SizeBytes:48898314,},ContainerImage{Names:[k8s.gcr.io/coredns:1.7.0],SizeBytes:45355487,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:41902332,},ContainerImage{Names:[docker.io/library/httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a docker.io/library/httpd:2.4.39-alpine],SizeBytes:41901429,},ContainerImage{Names:[docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 docker.io/library/httpd:2.4.38-alpine],SizeBytes:40765017,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:40765006,},ContainerImage{Names:[docker.io/ollivier/clearwater-live-test@sha256:1ec1c46f531b5466e62e2a3f1e65e97336e8ea0ca50f11a4305775d6aeac9a58 docker.io/ollivier/clearwater-live-test:hunter],SizeBytes:39060692,},ContainerImage{Names:[k8s.gcr.io/build-image/debian-iptables@sha256:abe8cef9e116f2d5ec1175c386e33841ff3386779138b425af876384b0fd7ccb k8s.gcr.io/build-image/debian-iptables:buster-v1.5.0],SizeBytes:38088315,},ContainerImage{Names:[k8s.gcr.io/build-image/debian-iptables@sha256:4c9410a4ee555dcb0e8b7bd6fc77c65ac400f7c5bd4555df8187630efaea6ea4 k8s.gcr.io/build-image/debian-iptables:buster-v1.3.0],SizeBytes:37934917,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:ff02aacd9766d597883fabafc7ad604c719a57611db1bcc1564c69a45b000a55 gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.17],SizeBytes:25311280,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 
k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:24757245,},ContainerImage{Names:[docker.io/pingcap/chaos-daemon@sha256:2138de599ca94104101fcd8f3bd04401f8c88f2bc721d0f77d55a81b85f4d92f docker.io/pingcap/chaos-daemon:v0.8.0],SizeBytes:21885344,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:20c828075d1e36f679d6a91e905b0927141eef5e15be0c9a1ca4a6a0ed9313d2 k8s.gcr.io/sig-storage/csi-provisioner:v2.1.0],SizeBytes:21205045,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:78e3393f5fd5ff6c1e5dada2478cfa456fb7164929e573cf9a87bf6532730679 k8s.gcr.io/sig-storage/csi-provisioner:v1.6.0],SizeBytes:19408504,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:2ffa647e8107cfd39e5f464e738dce014c9f5e51b108da36c3ab621048d0bbab k8s.gcr.io/sig-storage/csi-attacher:v2.2.0],SizeBytes:18451536,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:6c6a0332693a7c456378f6abd2bb40611826c1e1a733cadbdae2daab3125b71c k8s.gcr.io/sig-storage/csi-resizer:v0.5.0],SizeBytes:18412631,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:4051e85640c22f8e00c34dbd273576fc9e1e2829992656588062be9c0f69b04b k8s.gcr.io/e2e-test-images/nonroot:1.1],SizeBytes:17748448,},ContainerImage{Names:[docker.io/pingcap/chaos-mesh@sha256:b90c04665e0275602e52df37d4b6fdd4621df7d4e4c823fc63356559f67dea72 docker.io/pingcap/chaos-mesh:v0.8.0],SizeBytes:13863887,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e docker.io/coredns/coredns:latest docker.io/coredns/coredns:1.8.0],SizeBytes:12945155,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:4a6e0769130686518325b21b0c1d0688b54e7c79244d48e1b15634e98e40c6ef docker.io/coredns/coredns:1.7.1],SizeBytes:12935701,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 docker.io/aquasec/kube-bench:0.3.1],SizeBytes:8042926,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:9622c6a6dac7499a055a382930f4de82905a3c5735c0753f7094115c9c871309 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v1.3.0],SizeBytes:7717137,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 22 01:49:13.289: INFO: Logging kubelet events for node latest-worker Mar 22 01:49:13.292: INFO: Logging pods the kubelet thinks is on node latest-worker Mar 22 01:49:13.347: INFO: pod1 started at 2021-03-22 01:49:07 +0000 UTC (0+1 container statuses recorded) Mar 22 01:49:13.347: INFO: Container container1 ready: true, restart count 0 Mar 22 01:49:13.347: INFO: pod2 started at 2021-03-22 01:49:07 +0000 UTC (0+1 container statuses recorded) Mar 22 01:49:13.347: INFO: Container container1 ready: true, restart count 0 Mar 22 01:49:13.347: INFO: kube-proxy-5wvjm started at 2021-02-19 10:12:05 +0000 UTC (0+1 container statuses recorded) Mar 22 01:49:13.347: INFO: Container kube-proxy ready: true, restart count 0 Mar 22 01:49:13.347: INFO: kindnet-l4mzm started at 2021-03-22 00:02:51 +0000 UTC (0+1 container statuses recorded) Mar 22 01:49:13.347: INFO: Container 
kindnet-cni ready: true, restart count 0 Mar 22 01:49:13.347: INFO: chaos-daemon-vb9xf started at 2021-03-22 00:02:51 +0000 UTC (0+1 container statuses recorded) Mar 22 01:49:13.347: INFO: Container chaos-daemon ready: true, restart count 0 Mar 22 01:49:13.347: INFO: chaos-controller-manager-69c479c674-rdmrr started at 2021-03-22 00:02:52 +0000 UTC (0+1 container statuses recorded) Mar 22 01:49:13.347: INFO: Container chaos-mesh ready: true, restart count 0 W0322 01:49:13.354527 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Mar 22 01:49:13.604: INFO: Latency metrics for node latest-worker Mar 22 01:49:13.604: INFO: Logging node info for node latest-worker2 Mar 22 01:49:13.609: INFO: Node Info: &Node{ObjectMeta:{latest-worker2 7d2a1377-0c6f-45fb-899e-6c307ecb1803 7016689 0 2021-02-19 10:12:05 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux io.kubernetes.storage.mock/node:some-mock-node kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-worker2 kubernetes.io/os:linux] map[csi.volume.kubernetes.io/nodeid:{"csi-mock-csi-mock-volumes-1005":"csi-mock-csi-mock-volumes-1005","csi-mock-csi-mock-volumes-1125":"csi-mock-csi-mock-volumes-1125","csi-mock-csi-mock-volumes-1158":"csi-mock-csi-mock-volumes-1158","csi-mock-csi-mock-volumes-1167":"csi-mock-csi-mock-volumes-1167","csi-mock-csi-mock-volumes-117":"csi-mock-csi-mock-volumes-117","csi-mock-csi-mock-volumes-123":"csi-mock-csi-mock-volumes-123","csi-mock-csi-mock-volumes-1248":"csi-mock-csi-mock-volumes-1248","csi-mock-csi-mock-volumes-132":"csi-mock-csi-mock-volumes-132","csi-mock-csi-mock-volumes-1462":"csi-mock-csi-mock-volumes-1462","csi-mock-csi-mock-volumes-1508":"csi-mock-csi-mock-volumes-1508","csi-mock-csi-mock-volumes-1620":"csi-mock-csi-mock-volumes-1620","csi-mock-csi-mock-volumes-1694":"csi-mock-csi-mock-volumes-1694","csi-mock-csi-mock-volumes-1823":"csi-mock-csi-mock-volumes-1823","csi-mock-csi-mock-volumes-1829":"csi-mock-csi-mock-volumes-1829","csi-mock-csi-mock-volumes-1852":"csi-mock-csi-mock-volumes-1852","csi-mock-csi-mock-volumes-1856":"csi-mock-csi-mock-volumes-1856","csi-mock-csi-mock-volumes-194":"csi-mock-csi-mock-volumes-194","csi-mock-csi-mock-volumes-1943":"csi-mock-csi-mock-volumes-1943","csi-mock-csi-mock-volumes-2103":"csi-mock-csi-mock-volumes-2103","csi-mock-csi-mock-volumes-2109":"csi-mock-csi-mock-volumes-2109","csi-mock-csi-mock-volumes-2134":"csi-mock-csi-mock-volumes-2134","csi-mock-csi-mock-volumes-2212":"csi-mock-csi-mock-volumes-2212","csi-mock-csi-mock-volumes-2302":"csi-mock-csi-mock-volumes-2302","csi-mock-csi-mock-volumes-2308":"csi-mock-csi-mock-volumes-2308","csi-mock-csi-mock-volumes-2407":"csi-mock-csi-mock-volumes-2407","csi-mock-csi-mock-volumes-2458":"csi-mock-csi-mock-volumes-2458","csi-mock-csi-mock-volumes-2474":"csi-mock-csi-mock-volumes-2474","csi-mock-csi-mock-volumes-254":"csi-mock-csi-mock-volumes-254","csi-mock-csi-mock-volumes-2575":"csi-mock-csi-mock-volumes-2575","csi-mock-csi-mock-volumes-2622":"csi-mock-csi-mock-volumes-2622","csi-mock-csi-mock-volumes-2651":"csi-mock-csi-mock-volumes-2651","csi-mock-csi-mock-volumes-2668":"csi-mock-csi-mock-volumes-2668","csi-mock-csi-mock-volumes-2781":"csi-mock-csi-mock-volumes-2781","csi-mock-csi-mock-volumes-2791":"csi-mock-csi-mock-volumes-2791","csi-mock-csi-mock-volumes-2823":"csi-mock-csi-mock-volumes-2823","csi-mock-csi-mock-volumes-2847":"csi-mock-csi-mock-volumes-2847","csi-mock-csi-mock-volumes-295":"csi-mock-csi-mock-volumes-295",
"csi-mock-csi-mock-volumes-3129":"csi-mock-csi-mock-volumes-3129","csi-mock-csi-mock-volumes-3190":"csi-mock-csi-mock-volumes-3190","csi-mock-csi-mock-volumes-3428":"csi-mock-csi-mock-volumes-3428","csi-mock-csi-mock-volumes-3490":"csi-mock-csi-mock-volumes-3490","csi-mock-csi-mock-volumes-3491":"csi-mock-csi-mock-volumes-3491","csi-mock-csi-mock-volumes-3532":"csi-mock-csi-mock-volumes-3532","csi-mock-csi-mock-volumes-3609":"csi-mock-csi-mock-volumes-3609","csi-mock-csi-mock-volumes-368":"csi-mock-csi-mock-volumes-368","csi-mock-csi-mock-volumes-3698":"csi-mock-csi-mock-volumes-3698","csi-mock-csi-mock-volumes-3714":"csi-mock-csi-mock-volumes-3714","csi-mock-csi-mock-volumes-3720":"csi-mock-csi-mock-volumes-3720","csi-mock-csi-mock-volumes-3722":"csi-mock-csi-mock-volumes-3722","csi-mock-csi-mock-volumes-3723":"csi-mock-csi-mock-volumes-3723","csi-mock-csi-mock-volumes-3783":"csi-mock-csi-mock-volumes-3783","csi-mock-csi-mock-volumes-3887":"csi-mock-csi-mock-volumes-3887","csi-mock-csi-mock-volumes-3973":"csi-mock-csi-mock-volumes-3973","csi-mock-csi-mock-volumes-4074":"csi-mock-csi-mock-volumes-4074","csi-mock-csi-mock-volumes-4164":"csi-mock-csi-mock-volumes-4164","csi-mock-csi-mock-volumes-4181":"csi-mock-csi-mock-volumes-4181","csi-mock-csi-mock-volumes-4442":"csi-mock-csi-mock-volumes-4442","csi-mock-csi-mock-volumes-4483":"csi-mock-csi-mock-volumes-4483","csi-mock-csi-mock-volumes-4549":"csi-mock-csi-mock-volumes-4549","csi-mock-csi-mock-volumes-4707":"csi-mock-csi-mock-volumes-4707","csi-mock-csi-mock-volumes-4906":"csi-mock-csi-mock-volumes-4906","csi-mock-csi-mock-volumes-4977":"csi-mock-csi-mock-volumes-4977","csi-mock-csi-mock-volumes-5116":"csi-mock-csi-mock-volumes-5116","csi-mock-csi-mock-volumes-5166":"csi-mock-csi-mock-volumes-5166","csi-mock-csi-mock-volumes-5233":"csi-mock-csi-mock-volumes-5233","csi-mock-csi-mock-volumes-5344":"csi-mock-csi-mock-volumes-5344","csi-mock-csi-mock-volumes-5406":"csi-mock-csi-mock-volumes-5406","csi-mock-csi-mock-volumes-5433":"csi-mock-csi-mock-volumes-5433","csi-mock-csi-mock-volumes-5483":"csi-mock-csi-mock-volumes-5483","csi-mock-csi-mock-volumes-5486":"csi-mock-csi-mock-volumes-5486","csi-mock-csi-mock-volumes-5520":"csi-mock-csi-mock-volumes-5520","csi-mock-csi-mock-volumes-5540":"csi-mock-csi-mock-volumes-5540","csi-mock-csi-mock-volumes-5592":"csi-mock-csi-mock-volumes-5592","csi-mock-csi-mock-volumes-5741":"csi-mock-csi-mock-volumes-5741","csi-mock-csi-mock-volumes-5753":"csi-mock-csi-mock-volumes-5753","csi-mock-csi-mock-volumes-5790":"csi-mock-csi-mock-volumes-5790","csi-mock-csi-mock-volumes-5820":"csi-mock-csi-mock-volumes-5820","csi-mock-csi-mock-volumes-5830":"csi-mock-csi-mock-volumes-5830","csi-mock-csi-mock-volumes-5880":"csi-mock-csi-mock-volumes-5880","csi-mock-csi-mock-volumes-5886":"csi-mock-csi-mock-volumes-5886","csi-mock-csi-mock-volumes-5899":"csi-mock-csi-mock-volumes-5899","csi-mock-csi-mock-volumes-5925":"csi-mock-csi-mock-volumes-5925","csi-mock-csi-mock-volumes-5928":"csi-mock-csi-mock-volumes-5928","csi-mock-csi-mock-volumes-6098":"csi-mock-csi-mock-volumes-6098","csi-mock-csi-mock-volumes-6154":"csi-mock-csi-mock-volumes-6154","csi-mock-csi-mock-volumes-6193":"csi-mock-csi-mock-volumes-6193","csi-mock-csi-mock-volumes-6237":"csi-mock-csi-mock-volumes-6237","csi-mock-csi-mock-volumes-6393":"csi-mock-csi-mock-volumes-6393","csi-mock-csi-mock-volumes-6394":"csi-mock-csi-mock-volumes-6394","csi-mock-csi-mock-volumes-6468":"csi-mock-csi-mock-volumes-6468","csi-mock-csi-mock-volumes-6508":"csi-mock-csi-mock-volumes
-6508","csi-mock-csi-mock-volumes-6516":"csi-mock-csi-mock-volumes-6516","csi-mock-csi-mock-volumes-6520":"csi-mock-csi-mock-volumes-6520","csi-mock-csi-mock-volumes-6574":"csi-mock-csi-mock-volumes-6574","csi-mock-csi-mock-volumes-6663":"csi-mock-csi-mock-volumes-6663","csi-mock-csi-mock-volumes-6715":"csi-mock-csi-mock-volumes-6715","csi-mock-csi-mock-volumes-6754":"csi-mock-csi-mock-volumes-6754","csi-mock-csi-mock-volumes-6804":"csi-mock-csi-mock-volumes-6804","csi-mock-csi-mock-volumes-6918":"csi-mock-csi-mock-volumes-6918","csi-mock-csi-mock-volumes-6925":"csi-mock-csi-mock-volumes-6925","csi-mock-csi-mock-volumes-7092":"csi-mock-csi-mock-volumes-7092","csi-mock-csi-mock-volumes-7139":"csi-mock-csi-mock-volumes-7139","csi-mock-csi-mock-volumes-7270":"csi-mock-csi-mock-volumes-7270","csi-mock-csi-mock-volumes-7273":"csi-mock-csi-mock-volumes-7273","csi-mock-csi-mock-volumes-7442":"csi-mock-csi-mock-volumes-7442","csi-mock-csi-mock-volumes-7448":"csi-mock-csi-mock-volumes-7448","csi-mock-csi-mock-volumes-7543":"csi-mock-csi-mock-volumes-7543","csi-mock-csi-mock-volumes-7597":"csi-mock-csi-mock-volumes-7597","csi-mock-csi-mock-volumes-7608":"csi-mock-csi-mock-volumes-7608","csi-mock-csi-mock-volumes-7642":"csi-mock-csi-mock-volumes-7642","csi-mock-csi-mock-volumes-7659":"csi-mock-csi-mock-volumes-7659","csi-mock-csi-mock-volumes-7725":"csi-mock-csi-mock-volumes-7725","csi-mock-csi-mock-volumes-7760":"csi-mock-csi-mock-volumes-7760","csi-mock-csi-mock-volumes-778":"csi-mock-csi-mock-volumes-778","csi-mock-csi-mock-volumes-7811":"csi-mock-csi-mock-volumes-7811","csi-mock-csi-mock-volumes-7819":"csi-mock-csi-mock-volumes-7819","csi-mock-csi-mock-volumes-791":"csi-mock-csi-mock-volumes-791","csi-mock-csi-mock-volumes-7929":"csi-mock-csi-mock-volumes-7929","csi-mock-csi-mock-volumes-7930":"csi-mock-csi-mock-volumes-7930","csi-mock-csi-mock-volumes-7933":"csi-mock-csi-mock-volumes-7933","csi-mock-csi-mock-volumes-7950":"csi-mock-csi-mock-volumes-7950","csi-mock-csi-mock-volumes-8005":"csi-mock-csi-mock-volumes-8005","csi-mock-csi-mock-volumes-8070":"csi-mock-csi-mock-volumes-8070","csi-mock-csi-mock-volumes-8123":"csi-mock-csi-mock-volumes-8123","csi-mock-csi-mock-volumes-8132":"csi-mock-csi-mock-volumes-8132","csi-mock-csi-mock-volumes-8134":"csi-mock-csi-mock-volumes-8134","csi-mock-csi-mock-volumes-8381":"csi-mock-csi-mock-volumes-8381","csi-mock-csi-mock-volumes-8391":"csi-mock-csi-mock-volumes-8391","csi-mock-csi-mock-volumes-8409":"csi-mock-csi-mock-volumes-8409","csi-mock-csi-mock-volumes-8420":"csi-mock-csi-mock-volumes-8420","csi-mock-csi-mock-volumes-8467":"csi-mock-csi-mock-volumes-8467","csi-mock-csi-mock-volumes-8581":"csi-mock-csi-mock-volumes-8581","csi-mock-csi-mock-volumes-8619":"csi-mock-csi-mock-volumes-8619","csi-mock-csi-mock-volumes-8708":"csi-mock-csi-mock-volumes-8708","csi-mock-csi-mock-volumes-8766":"csi-mock-csi-mock-volumes-8766","csi-mock-csi-mock-volumes-8789":"csi-mock-csi-mock-volumes-8789","csi-mock-csi-mock-volumes-8800":"csi-mock-csi-mock-volumes-8800","csi-mock-csi-mock-volumes-8819":"csi-mock-csi-mock-volumes-8819","csi-mock-csi-mock-volumes-8830":"csi-mock-csi-mock-volumes-8830","csi-mock-csi-mock-volumes-9034":"csi-mock-csi-mock-volumes-9034","csi-mock-csi-mock-volumes-9051":"csi-mock-csi-mock-volumes-9051","csi-mock-csi-mock-volumes-9052":"csi-mock-csi-mock-volumes-9052","csi-mock-csi-mock-volumes-9211":"csi-mock-csi-mock-volumes-9211","csi-mock-csi-mock-volumes-9234":"csi-mock-csi-mock-volumes-9234","csi-mock-csi-mock-volumes-9238":"csi-mock-csi-mock-vo
lumes-9238","csi-mock-csi-mock-volumes-9316":"csi-mock-csi-mock-volumes-9316","csi-mock-csi-mock-volumes-9344":"csi-mock-csi-mock-volumes-9344","csi-mock-csi-mock-volumes-9386":"csi-mock-csi-mock-volumes-9386","csi-mock-csi-mock-volumes-941":"csi-mock-csi-mock-volumes-941","csi-mock-csi-mock-volumes-9416":"csi-mock-csi-mock-volumes-9416","csi-mock-csi-mock-volumes-9426":"csi-mock-csi-mock-volumes-9426","csi-mock-csi-mock-volumes-9429":"csi-mock-csi-mock-volumes-9429","csi-mock-csi-mock-volumes-9438":"csi-mock-csi-mock-volumes-9438","csi-mock-csi-mock-volumes-9439":"csi-mock-csi-mock-volumes-9439","csi-mock-csi-mock-volumes-9665":"csi-mock-csi-mock-volumes-9665","csi-mock-csi-mock-volumes-9772":"csi-mock-csi-mock-volumes-9772","csi-mock-csi-mock-volumes-9793":"csi-mock-csi-mock-volumes-9793","csi-mock-csi-mock-volumes-9921":"csi-mock-csi-mock-volumes-9921","csi-mock-csi-mock-volumes-9953":"csi-mock-csi-mock-volumes-9953"} kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-02-19 10:12:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {kube-controller-manager Update v1 2021-03-22 00:34:44 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}} {kubelet Update v1 2021-03-22 01:38:38 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:csi.volume.kubernetes.io/nodeid":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:io.kubernetes.storage.mock/node":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:example.com/fakecpu":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{},"f:scheduling.k8s.io/foo":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {e2e.test Update v1 2021-03-22 01:44:00 +0000 UTC FieldsV1 
{"f:status":{"f:capacity":{"f:example.com/fakecpu":{},"f:scheduling.k8s.io/foo":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:kind://docker/latest/latest-worker2,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},example.com/fakecpu: {{1 3} {} 1k DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},scheduling.k8s.io/foo: {{3 0} {} 3 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},example.com/fakecpu: {{1 3} {} 1k DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},scheduling.k8s.io/foo: {{3 0} {} 3 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-03-22 01:48:40 +0000 UTC,LastTransitionTime:2021-02-19 10:12:05 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-03-22 01:48:40 +0000 UTC,LastTransitionTime:2021-02-19 10:12:05 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-03-22 01:48:40 +0000 UTC,LastTransitionTime:2021-02-19 10:12:05 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-03-22 01:48:40 +0000 UTC,LastTransitionTime:2021-02-19 10:12:35 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.13,},NodeAddress{Type:Hostname,Address:latest-worker2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:c99819c175ab4bf18f91789643e4cec7,SystemUUID:67d3f1bb-f61c-4599-98ac-5879bee4ddde,BootID:b267d78b-f69b-4338-80e8-3f4944338e5d,KernelVersion:4.15.0-118-generic,OSImage:Ubuntu 20.04 LTS,ContainerRuntimeVersion:containerd://1.3.3-14-g449e9269,KubeletVersion:v1.21.0-alpha.0,KubeProxyVersion:v1.21.0-alpha.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/ollivier/clearwater-cassandra@sha256:7460a5e29c91e16bd36534b5284b27832e1c6e0f7c22a4bb79eda79942d250e1 docker.io/ollivier/clearwater-cassandra:hunter],SizeBytes:386500834,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead-prov@sha256:ef3803b4336ea217f9a467dac7f5c4bc3d2cd0f3cbbcc6c101419506fc3a6fa4 docker.io/ollivier/clearwater-homestead-prov:hunter],SizeBytes:360721934,},ContainerImage{Names:[docker.io/ollivier/clearwater-ellis@sha256:abcaf4a9ec74aa746f3b3949f05ee359349fce2b0b3b2deedf67164719cff5dc docker.io/ollivier/clearwater-ellis:hunter],SizeBytes:351519591,},ContainerImage{Names:[docker.io/ollivier/clearwater-homer@sha256:f8738d756137990120f6aafca7e0bc57186ab591a4b51508ba9ddf30688f4de1 docker.io/ollivier/clearwater-homer:hunter],SizeBytes:344304298,},ContainerImage{Names:[docker.io/ollivier/clearwater-astaire@sha256:5bc730435563dc3e31f3a3b7bb58b5899c453929b624052622b3a9a1c2fd04d8 
docker.io/ollivier/clearwater-astaire:hunter],SizeBytes:327310970,},ContainerImage{Names:[docker.io/ollivier/clearwater-bono@sha256:0da41ddbf72c158f8e617015450587634cdf6856696005601b2108df92a27254 docker.io/ollivier/clearwater-bono:hunter],SizeBytes:303708624,},ContainerImage{Names:[docker.io/ollivier/clearwater-sprout@sha256:fd91f90bdfbfac30288ca296962e36537f8fa311d34711904e027098227d9f49 docker.io/ollivier/clearwater-sprout:hunter],SizeBytes:298627136,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead@sha256:8467d6c9caefa6a402eba927ee31f8179b2dc3386ac4768dd04f4bd630a1b9e9 docker.io/ollivier/clearwater-homestead:hunter],SizeBytes:295167572,},ContainerImage{Names:[docker.io/ollivier/clearwater-ralf@sha256:7054a741dbd4ea45fa4efd8138c07e21d35fbed627971559de5df2d03883e94f docker.io/ollivier/clearwater-ralf:hunter],SizeBytes:287441316,},ContainerImage{Names:[docker.io/ollivier/clearwater-chronos@sha256:1d5f9b5385db632e3076463fec5a4db5d4a15ef21ac5a112cb42fb0551ffa36d docker.io/ollivier/clearwater-chronos:hunter],SizeBytes:285504787,},ContainerImage{Names:[k8s.gcr.io/etcd:3.4.13-0],SizeBytes:254659261,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.21.0-alpha.0],SizeBytes:136866158,},ContainerImage{Names:[docker.io/aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 docker.io/aquasec/kube-hunter:0.3.1],SizeBytes:124684106,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:112029652,},ContainerImage{Names:[docker.io/gluster/glusterdynamic-provisioner@sha256:90067cb05a7d217651e84576935934fd9dcff8e6bcdcbaa416bbf36fcd09dbd1 docker.io/gluster/glusterdynamic-provisioner:v1.0],SizeBytes:111200078,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner@sha256:8bc20b52ce066dd4ea3d9eaac40c04ea8a77f47c33789676580cf4c7c9ea3c3d k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner:v1.0],SizeBytes:111199402,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.21.0-alpha.0],SizeBytes:95511852,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.21.0-alpha.0],SizeBytes:88147262,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:85425365,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.21.0-alpha.0],SizeBytes:66088748,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:49230179,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:a3f7549ea04c419276e8b84e90a515bbce5bc8a057be2ed974ec45492eca346e k8s.gcr.io/e2e-test-images/agnhost:2.26],SizeBytes:49216572,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:4f3eec1e602e1e0b0e5858c8f3d399d0ef536799ec1f7666e025f211001cb706 k8s.gcr.io/e2e-test-images/agnhost:2.28],SizeBytes:49210832,},ContainerImage{Names:[k8s.gcr.io/coredns:1.7.0],SizeBytes:45355487,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 
k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:41902332,},ContainerImage{Names:[docker.io/library/httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a docker.io/library/httpd:2.4.39-alpine],SizeBytes:41901429,},ContainerImage{Names:[docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 docker.io/library/httpd:2.4.38-alpine],SizeBytes:40765017,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:40765006,},ContainerImage{Names:[docker.io/ollivier/clearwater-live-test@sha256:1ec1c46f531b5466e62e2a3f1e65e97336e8ea0ca50f11a4305775d6aeac9a58 docker.io/ollivier/clearwater-live-test:hunter],SizeBytes:39060692,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:ff02aacd9766d597883fabafc7ad604c719a57611db1bcc1564c69a45b000a55 gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.17],SizeBytes:25311280,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:24757245,},ContainerImage{Names:[docker.io/pingcap/chaos-daemon@sha256:2138de599ca94104101fcd8f3bd04401f8c88f2bc721d0f77d55a81b85f4d92f docker.io/pingcap/chaos-daemon:v0.8.0],SizeBytes:21885344,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:20c828075d1e36f679d6a91e905b0927141eef5e15be0c9a1ca4a6a0ed9313d2 k8s.gcr.io/sig-storage/csi-provisioner:v2.1.0],SizeBytes:21205045,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:78e3393f5fd5ff6c1e5dada2478cfa456fb7164929e573cf9a87bf6532730679 k8s.gcr.io/sig-storage/csi-provisioner:v1.6.0],SizeBytes:19408504,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/regression-issue-74839@sha256:b4f1d8d61bdad84bd50442d161d5460e4019d53e989b64220fdbc62fc87d76bf k8s.gcr.io/e2e-test-images/regression-issue-74839:1.2],SizeBytes:18651485,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:2ffa647e8107cfd39e5f464e738dce014c9f5e51b108da36c3ab621048d0bbab k8s.gcr.io/sig-storage/csi-attacher:v2.2.0],SizeBytes:18451536,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:6c6a0332693a7c456378f6abd2bb40611826c1e1a733cadbdae2daab3125b71c k8s.gcr.io/sig-storage/csi-resizer:v0.5.0],SizeBytes:18412631,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:4051e85640c22f8e00c34dbd273576fc9e1e2829992656588062be9c0f69b04b k8s.gcr.io/e2e-test-images/nonroot:1.1],SizeBytes:17748448,},ContainerImage{Names:[docker.io/pingcap/chaos-mesh@sha256:b90c04665e0275602e52df37d4b6fdd4621df7d4e4c823fc63356559f67dea72 docker.io/pingcap/chaos-mesh:v0.8.0],SizeBytes:13863887,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e docker.io/coredns/coredns:1.8.0],SizeBytes:12945155,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:4a6e0769130686518325b21b0c1d0688b54e7c79244d48e1b15634e98e40c6ef docker.io/coredns/coredns:1.7.1],SizeBytes:12935701,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:642ff9910da6ea9a8624b0234eef52af9ca75ecbec474c5507cb096bdfbae4e5 docker.io/coredns/coredns:latest],SizeBytes:12893350,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f 
k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 docker.io/aquasec/kube-bench:0.3.1],SizeBytes:8042926,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:9622c6a6dac7499a055a382930f4de82905a3c5735c0753f7094115c9c871309 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v1.3.0],SizeBytes:7717137,},ContainerImage{Names:[quay.io/coreos/etcd:v2.2.5],SizeBytes:7670543,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 22 01:49:13.610: INFO: Logging kubelet events for node latest-worker2 Mar 22 01:49:13.612: INFO: Logging pods the kubelet thinks are on node latest-worker2 Mar 22 01:49:13.637: INFO: kube-proxy-7q92q started at 2021-02-19 10:12:05 +0000 UTC (0+1 container statuses recorded) Mar 22 01:49:13.638: INFO: Container kube-proxy ready: true, restart count 0 Mar 22 01:49:13.638: INFO: kindnet-7qb7q started at 2021-03-22 00:02:52 +0000 UTC (0+1 container statuses recorded) Mar 22 01:49:13.638: INFO: Container kindnet-cni ready: true, restart count 0 Mar 22 01:49:13.638: INFO: chaos-daemon-4zjcg started at 2021-03-22 00:02:52 +0000 UTC (0+1 container statuses recorded) Mar 22 01:49:13.638: INFO: Container chaos-daemon ready: true, restart count 0 Mar 22 01:49:13.638: INFO: pod4 started at 2021-03-22 01:44:00 +0000 UTC (0+1 container statuses recorded) Mar 22 01:49:13.638: INFO: Container agnhost ready: true, restart count 0 W0322 01:49:13.643161 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Mar 22 01:49:13.860: INFO: Latency metrics for node latest-worker2 Mar 22 01:49:13.860: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "endpointslice-7089" for this suite.
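For context on the failure summarized below, this is roughly the lookup the test performs, shown as a minimal client-go sketch (assuming client-go v0.21+, where the DiscoveryV1 typed client is available); the namespace and Service name are taken from the log above. Managed EndpointSlices are found via the kubernetes.io/service-name label, and a 404 from this call would be consistent with a version skew where this alpha-level control plane (the node above reports kubelet/kube-proxy v1.21.0-alpha.0) does not yet serve the discovery.k8s.io/v1 API the newer test binary requests.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	// EndpointSlices managed for a Service carry the kubernetes.io/service-name label.
	slices, err := cs.DiscoveryV1().EndpointSlices("endpointslice-7089").List(
		context.TODO(),
		metav1.ListOptions{LabelSelector: "kubernetes.io/service-name=example-int-port"})
	if err != nil {
		// "the server could not find the requested resource" (404) here means
		// the discovery.k8s.io/v1 EndpointSlice API is not being served.
		panic(err)
	}
	fmt.Printf("found %d EndpointSlice(s)\n", len(slices.Items))
}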
• Failure [6.957 seconds] [sig-network] EndpointSlice /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should create Endpoints and EndpointSlices for Pods matching a Service [Conformance] [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Mar 22 01:49:12.485: Error fetching EndpointSlice for Service endpointslice-7089/example-int-port Unexpected error: <*errors.StatusError | 0xc001f06d20>: { ErrStatus: { TypeMeta: {Kind: "", APIVersion: ""}, ListMeta: { SelfLink: "", ResourceVersion: "", Continue: "", RemainingItemCount: nil, }, Status: "Failure", Message: "the server could not find the requested resource", Reason: "NotFound", Details: {Name: "", Group: "", Kind: "", UID: "", Causes: nil, RetryAfterSeconds: 0}, Code: 404, }, } the server could not find the requested resource occurred /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/endpointslice.go:522 ------------------------------ {"msg":"FAILED [sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","total":330,"completed":244,"skipped":4356,"failed":17,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]"]} SSSSS ------------------------------ [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-node] Container Runtime 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 01:49:13.872: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: create the container STEP: wait for the container to reach Failed STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Mar 22 01:49:18.021: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 01:49:18.068: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-7211" for this suite. •{"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":330,"completed":245,"skipped":4361,"failed":17,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration 
objects [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 01:49:18.078: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 22 01:49:19.067: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 22 01:49:21.758: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751974559, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751974559, loc:(*time.Location)(0x99208a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751974559, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751974558, loc:(*time.Location)(0x99208a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-8977db\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 22 01:49:24.796: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Registering a validating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Registering a mutating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Creating a dummy validating-webhook-configuration object STEP: Deleting the validating-webhook-configuration, which should be possible to remove STEP: Creating a dummy mutating-webhook-configuration object STEP: Deleting the mutating-webhook-configuration, which should be possible to remove [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 01:49:24.981: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-1851" for this suite. STEP: Destroying namespace "webhook-1851-markers" for this suite. 
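What the steps above assert, sketched in client-go terms (the configuration names here are illustrative, not the e2e fixture's): even with validating and mutating webhooks registered against ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, creating and deleting dummy configurations must still succeed, because webhooks must not be able to mutate or block deletion of webhook configuration objects.

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// deleteDummyWebhookConfigs performs the deletions the test verifies cannot be
// blocked; both calls are expected to succeed despite the registered webhooks.
func deleteDummyWebhookConfigs(ctx context.Context, cs kubernetes.Interface) error {
	if err := cs.AdmissionregistrationV1().ValidatingWebhookConfigurations().
		Delete(ctx, "dummy-validating-configuration", metav1.DeleteOptions{}); err != nil {
		return err
	}
	return cs.AdmissionregistrationV1().MutatingWebhookConfigurations().
		Delete(ctx, "dummy-mutating-configuration", metav1.DeleteOptions{})
}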
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 • [SLOW TEST:7.191 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":330,"completed":246,"skipped":4390,"failed":17,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]"]} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 01:49:25.269: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not be blocked by dependency circle [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Mar 22 01:49:25.464: INFO: 
pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"1a181aca-ad29-44ab-8d8f-cec8ff3e47f2", Controller:(*bool)(0xc00519bab2), BlockOwnerDeletion:(*bool)(0xc00519bab3)}} Mar 22 01:49:25.503: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"33e5b306-801d-4867-ae88-4f4b64ebd482", Controller:(*bool)(0xc0058ca902), BlockOwnerDeletion:(*bool)(0xc0058ca903)}} Mar 22 01:49:25.619: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"c3ea8af7-35a3-43f1-8e62-36f9e74da19c", Controller:(*bool)(0xc0058cab1a), BlockOwnerDeletion:(*bool)(0xc0058cab1b)}} [AfterEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 01:49:30.651: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-7124" for this suite. • [SLOW TEST:5.392 seconds] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be blocked by dependency circle [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":330,"completed":247,"skipped":4408,"failed":17,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI 
[Privileged:ClusterAdmin] works for CRD without validation schema [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 01:49:30.662: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD without validation schema [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Mar 22 01:49:30.734: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Mar 22 01:49:34.357: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-671 --namespace=crd-publish-openapi-671 create -f -' Mar 22 01:49:38.455: INFO: stderr: "" Mar 22 01:49:38.455: INFO: stdout: "e2e-test-crd-publish-openapi-6276-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" Mar 22 01:49:38.455: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-671 --namespace=crd-publish-openapi-671 delete e2e-test-crd-publish-openapi-6276-crds test-cr' Mar 22 01:49:38.573: INFO: stderr: "" Mar 22 01:49:38.573: INFO: stdout: "e2e-test-crd-publish-openapi-6276-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" Mar 22 01:49:38.573: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-671 --namespace=crd-publish-openapi-671 apply -f -' Mar 22 01:49:38.898: INFO: stderr: "" Mar 22 01:49:38.898: INFO: stdout: "e2e-test-crd-publish-openapi-6276-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" Mar 22 01:49:38.898: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-671 --namespace=crd-publish-openapi-671 delete e2e-test-crd-publish-openapi-6276-crds test-cr' Mar 22 01:49:39.022: INFO: stderr: "" Mar 22 01:49:39.022: INFO: stdout: "e2e-test-crd-publish-openapi-6276-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR without validation schema Mar 22 01:49:39.022: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-671 explain e2e-test-crd-publish-openapi-6276-crds' Mar 22 01:49:39.334: INFO: stderr: "" Mar 22 01:49:39.334: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-6276-crd\nVERSION: crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 01:49:42.912: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-671" for this suite. 
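In apiextensions.k8s.io/v1 a schema stanza is mandatory, so a "CRD without validation schema" like the one exercised above is expressed by preserving unknown fields; that is why kubectl accepted arbitrary properties and `kubectl explain` printed an empty DESCRIPTION. A sketch of the fixture's shape, with group/kind/plural taken from the log (the create call assumes the apiextensions clientset; this is not the e2e framework's exact fixture code):

import (
	"context"

	apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
	clientset "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func createSchemalessCRD(ctx context.Context, cs clientset.Interface) error {
	preserve := true
	crd := &apiextensionsv1.CustomResourceDefinition{
		ObjectMeta: metav1.ObjectMeta{
			Name: "e2e-test-crd-publish-openapi-6276-crds.crd-publish-openapi-test-empty.example.com",
		},
		Spec: apiextensionsv1.CustomResourceDefinitionSpec{
			Group: "crd-publish-openapi-test-empty.example.com",
			Scope: apiextensionsv1.NamespaceScoped,
			Names: apiextensionsv1.CustomResourceDefinitionNames{
				Plural: "e2e-test-crd-publish-openapi-6276-crds",
				Kind:   "E2e-test-crd-publish-openapi-6276-crd",
			},
			Versions: []apiextensionsv1.CustomResourceDefinitionVersion{{
				Name:    "v1",
				Served:  true,
				Storage: true,
				// No property-level validation: accept any object and keep
				// unknown fields, the v1 equivalent of "no schema".
				Schema: &apiextensionsv1.CustomResourceValidation{
					OpenAPIV3Schema: &apiextensionsv1.JSONSchemaProps{
						Type:                   "object",
						XPreserveUnknownFields: &preserve,
					},
				},
			}},
		},
	}
	_, err := cs.ApiextensionsV1().CustomResourceDefinitions().Create(ctx, crd, metav1.CreateOptions{})
	return err
}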
• [SLOW TEST:12.259 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD without validation schema [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":330,"completed":248,"skipped":4442,"failed":17,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]"]} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 01:49:42.921: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 [BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:105 STEP: Creating service test in namespace statefulset-6418 [It] should have a working scale subresource [Conformance] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating statefulset ss in namespace statefulset-6418 Mar 22 01:49:43.033: INFO: Found 0 stateful pods, waiting for 1 Mar 22 01:49:53.038: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: getting scale subresource STEP: updating a scale subresource STEP: verifying the statefulset Spec.Replicas was modified STEP: Patch a scale subresource STEP: verifying the statefulset Spec.Replicas was modified [AfterEach] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:116 Mar 22 01:49:53.157: INFO: Deleting all statefulset in ns statefulset-6418 Mar 22 01:49:53.218: INFO: Scaling statefulset ss to 0 Mar 22 01:50:23.266: INFO: Waiting for statefulset status.replicas updated to 0 Mar 22 01:50:23.269: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 01:50:23.286: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-6418" for this suite. • [SLOW TEST:40.371 seconds] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:95 should have a working scale subresource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":330,"completed":249,"skipped":4461,"failed":17,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should create and 
delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]"]} SS ------------------------------ [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 01:50:23.293: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe add, update, and delete watch notifications on configmaps [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a watch on configmaps with label A STEP: creating a watch on configmaps with label B STEP: creating a watch on configmaps with label A or B STEP: creating a configmap with label A and ensuring the correct watchers observe the notification Mar 22 01:50:23.398: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-2371 8c7bac54-797f-4441-9660-e4c51124bb4b 7017198 0 2021-03-22 01:50:23 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2021-03-22 01:50:23 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Mar 22 01:50:23.398: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-2371 8c7bac54-797f-4441-9660-e4c51124bb4b 7017198 0 2021-03-22 01:50:23 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2021-03-22 01:50:23 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying configmap A and ensuring the correct watchers observe the notification Mar 22 01:50:33.408: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-2371 8c7bac54-797f-4441-9660-e4c51124bb4b 7017242 0 2021-03-22 01:50:23 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2021-03-22 01:50:33 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} Mar 22 01:50:33.409: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-2371 8c7bac54-797f-4441-9660-e4c51124bb4b 7017242 0 2021-03-22 01:50:23 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2021-03-22 01:50:33 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying configmap A again and ensuring the correct watchers observe the notification Mar 22 01:50:43.419: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-2371 8c7bac54-797f-4441-9660-e4c51124bb4b 7017263 0 
2021-03-22 01:50:23 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2021-03-22 01:50:33 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Mar 22 01:50:43.420: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-2371 8c7bac54-797f-4441-9660-e4c51124bb4b 7017263 0 2021-03-22 01:50:23 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2021-03-22 01:50:33 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: deleting configmap A and ensuring the correct watchers observe the notification Mar 22 01:50:53.427: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-2371 8c7bac54-797f-4441-9660-e4c51124bb4b 7017283 0 2021-03-22 01:50:23 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2021-03-22 01:50:33 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Mar 22 01:50:53.427: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-2371 8c7bac54-797f-4441-9660-e4c51124bb4b 7017283 0 2021-03-22 01:50:23 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2021-03-22 01:50:33 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: creating a configmap with label B and ensuring the correct watchers observe the notification Mar 22 01:51:03.436: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-2371 4baa1292-1fff-4342-b639-1739c7741b1d 7017303 0 2021-03-22 01:51:03 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2021-03-22 01:51:03 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Mar 22 01:51:03.436: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-2371 4baa1292-1fff-4342-b639-1739c7741b1d 7017303 0 2021-03-22 01:51:03 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2021-03-22 01:51:03 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} STEP: deleting configmap B and ensuring the correct watchers observe the notification Mar 22 01:51:13.444: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-2371 4baa1292-1fff-4342-b639-1739c7741b1d 7017323 0 2021-03-22 01:51:03 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2021-03-22 01:51:03 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Mar 22 01:51:13.444: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-2371 4baa1292-1fff-4342-b639-1739c7741b1d 7017323 0 2021-03-22 01:51:03 +0000 UTC 
map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2021-03-22 01:51:03 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 01:51:23.444: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-2371" for this suite. • [SLOW TEST:60.164 seconds] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe add, update, and delete watch notifications on configmaps [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":330,"completed":250,"skipped":4463,"failed":17,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]"]} SSSSS ------------------------------ [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 01:51:23.457: INFO: 
>>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Mar 22 01:51:28.005: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 01:51:28.127: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-8965" for this suite. •{"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":330,"completed":251,"skipped":4468,"failed":17,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-storage] Downward API volume 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 01:51:28.135: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 [It] should update labels on modification [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating the pod Mar 22 01:51:28.294: INFO: The status of Pod labelsupdate0dc9bb60-14b6-4e06-ad7d-be11529b2fae is Pending, waiting for it to be Running (with Ready = true) Mar 22 01:51:30.430: INFO: The status of Pod labelsupdate0dc9bb60-14b6-4e06-ad7d-be11529b2fae is Pending, waiting for it to be Running (with Ready = true) Mar 22 01:51:32.299: INFO: The status of Pod labelsupdate0dc9bb60-14b6-4e06-ad7d-be11529b2fae is Running (Ready = true) Mar 22 01:51:32.896: INFO: Successfully updated pod "labelsupdate0dc9bb60-14b6-4e06-ad7d-be11529b2fae" [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 01:51:36.943: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4199" for this suite. • [SLOW TEST:8.818 seconds] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should update labels on modification [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":330,"completed":252,"skipped":4496,"failed":17,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session 
affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 01:51:36.954: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/init_container.go:162 [It] should invoke init containers on a RestartAlways pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating the pod Mar 22 01:51:37.009: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 01:51:49.899: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-6675" for this suite. 
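The init-container pod created above ("PodSpec: initContainers in spec.initContainers") follows the standard contract: init containers run serially, each must exit 0 before the regular containers start, and on a RestartAlways pod readiness is only reported afterwards. A sketch with assumed names and images:

apiVersion: v1
kind: Pod
metadata:
  name: init-demo                    # hypothetical name
spec:
  restartPolicy: Always
  initContainers:                    # run one at a time, in order
  - name: init1
    image: k8s.gcr.io/e2e-test-images/busybox:1.29
    command: ["true"]
  - name: init2
    image: k8s.gcr.io/e2e-test-images/busybox:1.29
    command: ["true"]
  containers:
  - name: app                        # long-running main container; image tag is an assumption
    image: k8s.gcr.io/pause:3.2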
• [SLOW TEST:12.979 seconds] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should invoke init containers on a RestartAlways pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":330,"completed":253,"skipped":4550,"failed":17,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 01:51:49.933: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of different groups [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: CRs in different groups (two CRDs) show up in OpenAPI documentation Mar 22 01:51:50.025: INFO: >>> kubeConfig: /root/.kube/config Mar 22 01:51:53.619: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] 
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 01:52:06.046: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-3808" for this suite. • [SLOW TEST:16.118 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of different groups [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":330,"completed":254,"skipped":4579,"failed":17,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]"]} SSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 01:52:06.052: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-preemption STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] 
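The crd-publish-openapi case above registers two CRDs in different API groups and verifies that both kinds show up in the server's aggregated OpenAPI document (served at /openapi/v2). Publishing requires a structural schema on each version; a sketch of one of the pair, with group, kind, and field names assumed:

apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: foos.groupa.example.com      # hypothetical group and kind
spec:
  group: groupa.example.com
  scope: Namespaced
  names:
    plural: foos
    singular: foo
    kind: Foo
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:               # structural schema; required for OpenAPI publishing
        type: object
        properties:
          spec:
            type: object
            properties:
              replicas:
                type: integer

A second, otherwise identical CRD under a different group (say groupb.example.com) completes the pair the test checks for.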
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:90 Mar 22 01:52:06.214: INFO: Waiting up to 1m0s for all nodes to be ready Mar 22 01:53:06.240: INFO: Waiting for terminating namespaces to be deleted... [It] validates lower priority pod preemption by critical pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Create pods that use 2/3 of node resources. Mar 22 01:53:06.269: INFO: Created pod: pod0-sched-preemption-low-priority Mar 22 01:53:06.313: INFO: Created pod: pod1-sched-preemption-medium-priority STEP: Wait for pods to be scheduled. STEP: Run a critical pod that use same resources as that of a lower priority pod [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 01:53:40.474: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-2470" for this suite. [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:78 • [SLOW TEST:94.486 seconds] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates lower priority pod preemption by critical pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance]","total":330,"completed":255,"skipped":4596,"failed":17,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-apps] CronJob should support CronJob API operations 
[Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]"]} SSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 01:53:40.537: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with projected pod [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating pod pod-subpath-test-projected-4mjn STEP: Creating a pod to test atomic-volume-subpath Mar 22 01:53:40.981: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-4mjn" in namespace "subpath-6420" to be "Succeeded or Failed" Mar 22 01:53:40.984: INFO: Pod "pod-subpath-test-projected-4mjn": Phase="Pending", Reason="", readiness=false. Elapsed: 3.365822ms Mar 22 01:53:42.988: INFO: Pod "pod-subpath-test-projected-4mjn": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007176618s Mar 22 01:53:44.994: INFO: Pod "pod-subpath-test-projected-4mjn": Phase="Pending", Reason="", readiness=false. Elapsed: 4.012998144s Mar 22 01:53:46.998: INFO: Pod "pod-subpath-test-projected-4mjn": Phase="Running", Reason="", readiness=true. Elapsed: 6.017008322s Mar 22 01:53:49.002: INFO: Pod "pod-subpath-test-projected-4mjn": Phase="Running", Reason="", readiness=true. Elapsed: 8.021296376s Mar 22 01:53:51.008: INFO: Pod "pod-subpath-test-projected-4mjn": Phase="Running", Reason="", readiness=true. Elapsed: 10.027005918s Mar 22 01:53:53.014: INFO: Pod "pod-subpath-test-projected-4mjn": Phase="Running", Reason="", readiness=true. Elapsed: 12.033194074s Mar 22 01:53:55.024: INFO: Pod "pod-subpath-test-projected-4mjn": Phase="Running", Reason="", readiness=true. Elapsed: 14.043345912s Mar 22 01:53:57.029: INFO: Pod "pod-subpath-test-projected-4mjn": Phase="Running", Reason="", readiness=true. Elapsed: 16.048459112s Mar 22 01:53:59.035: INFO: Pod "pod-subpath-test-projected-4mjn": Phase="Running", Reason="", readiness=true. Elapsed: 18.054037005s Mar 22 01:54:01.040: INFO: Pod "pod-subpath-test-projected-4mjn": Phase="Running", Reason="", readiness=true. Elapsed: 20.059335666s Mar 22 01:54:03.069: INFO: Pod "pod-subpath-test-projected-4mjn": Phase="Running", Reason="", readiness=true. Elapsed: 22.087918419s Mar 22 01:54:05.074: INFO: Pod "pod-subpath-test-projected-4mjn": Phase="Running", Reason="", readiness=true. Elapsed: 24.09304098s Mar 22 01:54:07.080: INFO: Pod "pod-subpath-test-projected-4mjn": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 26.098828856s STEP: Saw pod success Mar 22 01:54:07.080: INFO: Pod "pod-subpath-test-projected-4mjn" satisfied condition "Succeeded or Failed" Mar 22 01:54:07.084: INFO: Trying to get logs from node latest-worker pod pod-subpath-test-projected-4mjn container test-container-subpath-projected-4mjn: STEP: delete the pod Mar 22 01:54:07.139: INFO: Waiting for pod pod-subpath-test-projected-4mjn to disappear Mar 22 01:54:07.145: INFO: Pod pod-subpath-test-projected-4mjn no longer exists STEP: Deleting pod pod-subpath-test-projected-4mjn Mar 22 01:54:07.145: INFO: Deleting pod "pod-subpath-test-projected-4mjn" in namespace "subpath-6420" [AfterEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 01:54:07.147: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-6420" for this suite. • [SLOW TEST:26.637 seconds] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with projected pod [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":330,"completed":256,"skipped":4603,"failed":17,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]"]} SSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook 
[Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 01:54:07.175: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 22 01:54:07.841: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 22 01:54:10.227: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751974847, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751974847, loc:(*time.Location)(0x99208a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751974848, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751974847, loc:(*time.Location)(0x99208a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-8977db\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 22 01:54:13.357: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny custom resource creation, update and deletion [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Mar 22 01:54:13.362: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the custom resource webhook via the AdmissionRegistration API STEP: Creating a custom resource that should be denied by the webhook STEP: Creating a custom resource whose deletion would be denied by the webhook STEP: Updating the custom resource with disallowed data should be denied STEP: Deleting the custom resource should be denied STEP: Remove the offending key and value from the custom resource data STEP: Deleting the updated custom resource should be successful [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 01:54:14.663: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-1680" for this suite. STEP: Destroying namespace "webhook-1680-markers" for this suite. 
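The webhook suite above registers a validating webhook against the test CRD and then verifies that create, update, and delete of the custom resource are all denied until the offending data is removed. The registration object looks roughly like this; the service name and namespace match the log, while the rule targets, handler path, and port are assumptions:

apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: deny-custom-resource-demo    # hypothetical name
webhooks:
- name: deny-custom-resource.example.com
  admissionReviewVersions: ["v1"]
  sideEffects: None
  failurePolicy: Fail
  rules:
  - apiGroups: ["stable.example.com"]   # assumed CRD group
    apiVersions: ["v1"]
    operations: ["CREATE", "UPDATE", "DELETE"]
    resources: ["testcrds"]             # assumed plural
  clientConfig:
    service:
      namespace: webhook-1680
      name: e2e-test-webhook
      path: /custom-resource            # assumed handler path
      port: 8443                        # assumed service port
    # caBundle for the server cert set up in BeforeEach is elided here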
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 • [SLOW TEST:7.589 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny custom resource creation, update and deletion [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":330,"completed":257,"skipped":4614,"failed":17,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]"]} SSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 01:54:14.764: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Performing setup for networking test in namespace pod-network-test-9550 STEP: creating a selector STEP: Creating the service pods in kubernetes Mar 22 01:54:14.892: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Mar 22 01:54:14.994: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 22 01:54:16.998: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 22 01:54:19.006: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 22 01:54:20.999: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 22 01:54:22.999: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 22 01:54:24.999: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 22 01:54:27.000: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 22 01:54:29.000: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 22 01:54:30.999: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 22 01:54:33.001: INFO: The status of Pod netserver-0 is Running (Ready = true) Mar 22 01:54:33.008: INFO: The status of Pod netserver-1 is Running (Ready = false) Mar 22 01:54:35.014: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Mar 22 01:54:41.084: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2 Mar 22 01:54:41.084: INFO: Going to poll 10.244.2.248 on port 8081 at least 0 times, with a maximum of 34 tries before failing Mar 22 01:54:41.086: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.2.248 8081 | grep -v '^\s*$'] Namespace:pod-network-test-9550 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 22 01:54:41.086: INFO: >>> kubeConfig: /root/.kube/config Mar 22 01:54:42.226: INFO: Found all 1 expected endpoints: [netserver-0] Mar 22 01:54:42.226: INFO: Going to poll 10.244.1.6 on port 8081 at least 0 times, with a maximum of 34 tries before failing Mar 22 01:54:42.231: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.1.6 8081 | grep -v '^\s*$'] Namespace:pod-network-test-9550 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 22 01:54:42.231: INFO: >>> kubeConfig: /root/.kube/config Mar 22 01:54:43.339: INFO: Found all 1 expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 01:54:43.339: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-9550" for this suite. 
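The UDP probe visible above is agnhost talking to agnhost: each netserver pod runs agnhost netexec, which echoes its hostname back when it receives the hostName command on its UDP port, and the host-network test pod drives it with the echo hostName | nc -w 1 -u <podIP> 8081 pipeline shown in the log. A netserver-style pod, with the image tag as an assumption:

apiVersion: v1
kind: Pod
metadata:
  name: netserver-demo               # hypothetical name
spec:
  containers:
  - name: webserver
    image: k8s.gcr.io/e2e-test-images/agnhost:2.28   # tag is an assumption
    args: ["netexec", "--http-port=8083", "--udp-port=8081"]
    ports:
    - containerPort: 8083
    - containerPort: 8081
      protocol: UDP

"Found all 1 expected endpoints" then reduces to the nc reply matching the pod's name.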
• [SLOW TEST:28.587 seconds] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/framework.go:23 Granular Checks: Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/networking.go:30 should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":330,"completed":258,"skipped":4622,"failed":17,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]"]} SSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 01:54:43.351: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a secret. 
[Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Discovering how many secrets are in namespace by default STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Secret STEP: Ensuring resource quota status captures secret creation STEP: Deleting a secret STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 01:55:01.217: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-2418" for this suite. • [SLOW TEST:17.876 seconds] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a secret. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance]","total":330,"completed":259,"skipped":4628,"failed":17,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]"]} SSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
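The ResourceQuota case above follows a fixed pattern: create a quota with object-count limits, then watch .status.used for secrets rise when the Secret is created and fall when it is deleted (the same pattern repeats later in this run with replication controllers). A quota covering both object types, with the name chosen for illustration:

apiVersion: v1
kind: ResourceQuota
metadata:
  name: quota-demo                   # hypothetical name
spec:
  hard:
    secrets: "10"
    replicationcontrollers: "5"

Usage is recalculated asynchronously by the quota controller, which is why the test polls ("Ensuring resource quota status captures secret creation") rather than asserting immediately. The initial "Discovering how many secrets are in namespace by default" step exists because service account token secrets already count against the quota.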
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 01:55:01.228: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 22 01:55:02.147: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 22 01:55:05.946: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751974902, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751974902, loc:(*time.Location)(0x99208a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751974902, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751974902, loc:(*time.Location)(0x99208a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-8977db\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 22 01:55:07.952: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751974902, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751974902, loc:(*time.Location)(0x99208a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751974902, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751974902, loc:(*time.Location)(0x99208a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-8977db\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 22 01:55:10.989: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny pod and configmap creation [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod that should be denied by the webhook STEP: create a pod that causes the webhook to hang STEP: create a configmap that should be denied by the webhook STEP: create a configmap that should be admitted by the webhook STEP: update (PUT) the admitted configmap to a non-compliant one should be rejected by the webhook STEP: update (PATCH) the 
admitted configmap to a non-compliant one should be rejected by the webhook STEP: create a namespace that bypass the webhook STEP: create a configmap that violates the webhook policy but is in a whitelisted namespace [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 01:55:21.151: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-5256" for this suite. STEP: Destroying namespace "webhook-5256-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 • [SLOW TEST:20.038 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny pod and configmap creation [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":330,"completed":260,"skipped":4631,"failed":17,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]"]} SS ------------------------------ [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-cli] Kubectl client 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 01:55:21.266: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241 [BeforeEach] Kubectl run pod /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1514 [It] should create a pod from an image when restart is Never [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: running the image k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 Mar 22 01:55:21.343: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=kubectl-7957 run e2e-test-httpd-pod --restart=Never --image=k8s.gcr.io/e2e-test-images/httpd:2.4.38-1' Mar 22 01:55:21.497: INFO: stderr: "" Mar 22 01:55:21.497: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod was created [AfterEach] Kubectl run pod /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1518 Mar 22 01:55:21.503: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=kubectl-7957 delete pods e2e-test-httpd-pod' Mar 22 01:55:35.314: INFO: stderr: "" Mar 22 01:55:35.314: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 01:55:35.314: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7957" for this suite. 
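With --restart=Never, kubectl run creates a bare Pod rather than a Deployment or Job; the object submitted above is roughly equivalent to this manifest:

apiVersion: v1
kind: Pod
metadata:
  name: e2e-test-httpd-pod
  labels:
    run: e2e-test-httpd-pod          # kubectl run adds this label automatically
spec:
  restartPolicy: Never
  containers:
  - name: e2e-test-httpd-pod         # kubectl names the container after the pod
    image: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1

The roughly 14-second gap between the delete command and its output is expected: kubectl delete pods waits for graceful termination to finish before returning.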
• [SLOW TEST:14.083 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl run pod /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1511 should create a pod from an image when restart is Never [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance]","total":330,"completed":261,"skipped":4633,"failed":17,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 01:55:35.350: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin Mar 22 01:55:35.444: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4f34e147-bd0e-44e3-8068-32086541521c" in namespace "projected-7625" to be "Succeeded or Failed" Mar 22 01:55:35.447: INFO: Pod "downwardapi-volume-4f34e147-bd0e-44e3-8068-32086541521c": Phase="Pending", Reason="", readiness=false. Elapsed: 3.174601ms Mar 22 01:55:37.451: INFO: Pod "downwardapi-volume-4f34e147-bd0e-44e3-8068-32086541521c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007818474s Mar 22 01:55:39.456: INFO: Pod "downwardapi-volume-4f34e147-bd0e-44e3-8068-32086541521c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012701762s STEP: Saw pod success Mar 22 01:55:39.456: INFO: Pod "downwardapi-volume-4f34e147-bd0e-44e3-8068-32086541521c" satisfied condition "Succeeded or Failed" Mar 22 01:55:39.459: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-4f34e147-bd0e-44e3-8068-32086541521c container client-container: STEP: delete the pod Mar 22 01:55:39.652: INFO: Waiting for pod downwardapi-volume-4f34e147-bd0e-44e3-8068-32086541521c to disappear Mar 22 01:55:39.657: INFO: Pod downwardapi-volume-4f34e147-bd0e-44e3-8068-32086541521c no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 01:55:39.657: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7625" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":330,"completed":262,"skipped":4667,"failed":17,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-apps] CronJob should 
support CronJob API operations [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]"]} SSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 01:55:39.665: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test emptydir 0777 on tmpfs Mar 22 01:55:39.785: INFO: Waiting up to 5m0s for pod "pod-472815fc-50d2-4ccb-8299-5262cccc8490" in namespace "emptydir-6475" to be "Succeeded or Failed" Mar 22 01:55:39.814: INFO: Pod "pod-472815fc-50d2-4ccb-8299-5262cccc8490": Phase="Pending", Reason="", readiness=false. Elapsed: 29.35132ms Mar 22 01:55:41.818: INFO: Pod "pod-472815fc-50d2-4ccb-8299-5262cccc8490": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033084588s Mar 22 01:55:43.822: INFO: Pod "pod-472815fc-50d2-4ccb-8299-5262cccc8490": Phase="Running", Reason="", readiness=true. Elapsed: 4.037534927s Mar 22 01:55:45.827: INFO: Pod "pod-472815fc-50d2-4ccb-8299-5262cccc8490": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.042445417s STEP: Saw pod success Mar 22 01:55:45.827: INFO: Pod "pod-472815fc-50d2-4ccb-8299-5262cccc8490" satisfied condition "Succeeded or Failed" Mar 22 01:55:45.831: INFO: Trying to get logs from node latest-worker pod pod-472815fc-50d2-4ccb-8299-5262cccc8490 container test-container: STEP: delete the pod Mar 22 01:55:45.874: INFO: Waiting for pod pod-472815fc-50d2-4ccb-8299-5262cccc8490 to disappear Mar 22 01:55:45.900: INFO: Pod pod-472815fc-50d2-4ccb-8299-5262cccc8490 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 01:55:45.901: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6475" for this suite. 
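The emptydir case above requests a tmpfs-backed volume (medium: Memory), which the kubelet mounts with mode 0777 by default, so a non-root container can create files in it; the conformance test verifies the mode and file contents with the agnhost mounttest helper. A rough busybox equivalent, with image and commands as assumptions:

apiVersion: v1
kind: Pod
metadata:
  name: emptydir-demo                # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: k8s.gcr.io/e2e-test-images/busybox:1.29   # image tag is an assumption
    command: ["/bin/sh", "-c", "stat -c '%a' /mnt/tmpfs && echo hello > /mnt/tmpfs/f"]
    securityContext:
      runAsUser: 1000                # non-root, per the test name
    volumeMounts:
    - name: scratch
      mountPath: /mnt/tmpfs
  volumes:
  - name: scratch
    emptyDir:
      medium: Memory                 # tmpfs instead of node disk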
• [SLOW TEST:6.245 seconds] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":330,"completed":263,"skipped":4672,"failed":17,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 01:55:45.911: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via environment variable [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap configmap-7030/configmap-test-918ef390-c963-4eb5-8447-079d54a5541a STEP: Creating a pod to test consume configMaps Mar 22 01:55:46.070: INFO: Waiting up to 5m0s for pod "pod-configmaps-7727ea2d-9c4e-4c1e-8c98-ad03aa006c3c" in namespace "configmap-7030" to be "Succeeded or 
Failed" Mar 22 01:55:46.109: INFO: Pod "pod-configmaps-7727ea2d-9c4e-4c1e-8c98-ad03aa006c3c": Phase="Pending", Reason="", readiness=false. Elapsed: 39.634239ms Mar 22 01:55:48.114: INFO: Pod "pod-configmaps-7727ea2d-9c4e-4c1e-8c98-ad03aa006c3c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.044514039s Mar 22 01:55:50.119: INFO: Pod "pod-configmaps-7727ea2d-9c4e-4c1e-8c98-ad03aa006c3c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.049615439s STEP: Saw pod success Mar 22 01:55:50.119: INFO: Pod "pod-configmaps-7727ea2d-9c4e-4c1e-8c98-ad03aa006c3c" satisfied condition "Succeeded or Failed" Mar 22 01:55:50.122: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-7727ea2d-9c4e-4c1e-8c98-ad03aa006c3c container env-test: STEP: delete the pod Mar 22 01:55:50.250: INFO: Waiting for pod pod-configmaps-7727ea2d-9c4e-4c1e-8c98-ad03aa006c3c to disappear Mar 22 01:55:50.302: INFO: Pod pod-configmaps-7727ea2d-9c4e-4c1e-8c98-ad03aa006c3c no longer exists [AfterEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 01:55:50.302: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-7030" for this suite. •{"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":330,"completed":264,"skipped":4694,"failed":17,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. 
[Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 01:55:50.310: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replication controller. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicationController STEP: Ensuring resource quota status captures replication controller creation STEP: Deleting a ReplicationController STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 01:56:01.495: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-9042" for this suite. • [SLOW TEST:11.196 seconds] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replication controller. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. 
[Conformance]","total":330,"completed":265,"skipped":4716,"failed":17,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]"]} SSSSSSSSS ------------------------------ [sig-apps] CronJob should not schedule jobs when suspended [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 01:56:01.507: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename cronjob STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:63 [It] should not schedule jobs when suspended [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a suspended cronjob Mar 22 01:56:01.635: FAIL: Failed to create CronJob in namespace cronjob-5972 Unexpected error: <*errors.StatusError | 0xc00367ba40>: { ErrStatus: { TypeMeta: {Kind: "", APIVersion: ""}, ListMeta: { SelfLink: "", ResourceVersion: "", Continue: "", RemainingItemCount: nil, }, Status: "Failure", Message: "the server could not find the requested resource", Reason: "NotFound", Details: {Name: "", Group: "", Kind: "", UID: "", Causes: nil, RetryAfterSeconds: 0}, Code: 404, }, } the server could not find the requested resource occurred Full Stack Trace k8s.io/kubernetes/test/e2e/apps.glob..func1.3() 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:106 +0x231 k8s.io/kubernetes/test/e2e.RunE2ETests(0xc002c6a180) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c k8s.io/kubernetes/test/e2e.TestE2E(0xc002c6a180) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b testing.tRunner(0xc002c6a180, 0x6d60740) /usr/local/go/src/testing/testing.go:1194 +0xef created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1239 +0x2b3 [AfterEach] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 STEP: Collecting events from namespace "cronjob-5972". STEP: Found 0 events. Mar 22 01:56:01.647: INFO: POD NODE PHASE GRACE CONDITIONS Mar 22 01:56:01.647: INFO: Mar 22 01:56:01.650: INFO: Logging node info for node latest-control-plane Mar 22 01:56:01.652: INFO: Node Info: &Node{ObjectMeta:{latest-control-plane 490b9532-4cb6-4803-8805-500c50bef538 7018124 0 2021-02-19 10:11:38 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-control-plane kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master:] map[kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2021-02-19 10:11:38 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {kubeadm Update v1 2021-02-19 10:11:39 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{}}}}} {kube-controller-manager Update v1 2021-02-19 10:12:16 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:kind://docker/latest/latest-control-plane,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-03-22 01:54:51 +0000 UTC,LastTransitionTime:2021-02-19 10:11:38 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-03-22 01:54:51 +0000 UTC,LastTransitionTime:2021-02-19 10:11:38 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-03-22 01:54:51 +0000 UTC,LastTransitionTime:2021-02-19 10:11:38 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-03-22 01:54:51 +0000 UTC,LastTransitionTime:2021-02-19 10:12:15 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.14,},NodeAddress{Type:Hostname,Address:latest-control-plane,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:2277e49732264d9b915753a27b5b08cc,SystemUUID:3fcd47a6-9190-448f-a26a-9823c0424f23,BootID:b267d78b-f69b-4338-80e8-3f4944338e5d,KernelVersion:4.15.0-118-generic,OSImage:Ubuntu 20.04 LTS,ContainerRuntimeVersion:containerd://1.3.3-14-g449e9269,KubeletVersion:v1.21.0-alpha.0,KubeProxyVersion:v1.21.0-alpha.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/etcd:3.4.13-0],SizeBytes:254659261,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.21.0-alpha.0],SizeBytes:136866158,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.21.0-alpha.0],SizeBytes:95511852,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.21.0-alpha.0],SizeBytes:88147262,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.21.0-alpha.0],SizeBytes:66088748,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:a3f7549ea04c419276e8b84e90a515bbce5bc8a057be2ed974ec45492eca346e k8s.gcr.io/e2e-test-images/agnhost:2.26],SizeBytes:49216572,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:4f3eec1e602e1e0b0e5858c8f3d399d0ef536799ec1f7666e025f211001cb706 
k8s.gcr.io/e2e-test-images/agnhost:2.28],SizeBytes:49210832,},ContainerImage{Names:[k8s.gcr.io/coredns:1.7.0],SizeBytes:45355487,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 docker.io/aquasec/kube-bench:0.3.1],SizeBytes:8042926,},ContainerImage{Names:[k8s.gcr.io/pause:3.2],SizeBytes:685724,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:301268,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 22 01:56:01.653: INFO: Logging kubelet events for node latest-control-plane Mar 22 01:56:01.655: INFO: Logging pods the kubelet thinks is on node latest-control-plane Mar 22 01:56:01.682: INFO: etcd-latest-control-plane started at 2021-02-19 10:11:46 +0000 UTC (0+1 container statuses recorded) Mar 22 01:56:01.682: INFO: Container etcd ready: true, restart count 0 Mar 22 01:56:01.682: INFO: kube-proxy-6jdsd started at 2021-02-19 10:11:54 +0000 UTC (0+1 container statuses recorded) Mar 22 01:56:01.682: INFO: Container kube-proxy ready: true, restart count 0 Mar 22 01:56:01.682: INFO: coredns-74ff55c5b-tqd5x started at 2021-03-22 00:01:48 +0000 UTC (0+1 container statuses recorded) Mar 22 01:56:01.682: INFO: Container coredns ready: true, restart count 0 Mar 22 01:56:01.682: INFO: local-path-provisioner-8b46957d4-54gls started at 2021-02-19 10:12:21 +0000 UTC (0+1 container statuses recorded) Mar 22 01:56:01.682: INFO: Container local-path-provisioner ready: true, restart count 0 Mar 22 01:56:01.682: INFO: kube-controller-manager-latest-control-plane started at 2021-02-19 10:11:46 +0000 UTC (0+1 container statuses recorded) Mar 22 01:56:01.682: INFO: Container kube-controller-manager ready: true, restart count 0 Mar 22 01:56:01.682: INFO: kube-scheduler-latest-control-plane started at 2021-02-19 10:11:46 +0000 UTC (0+1 container statuses recorded) Mar 22 01:56:01.682: INFO: Container kube-scheduler ready: true, restart count 0 Mar 22 01:56:01.682: INFO: kube-apiserver-latest-control-plane started at 2021-02-19 10:11:46 +0000 UTC (0+1 container statuses recorded) Mar 22 01:56:01.682: INFO: Container kube-apiserver ready: true, restart count 0 Mar 22 01:56:01.682: INFO: kindnet-94zqp started at 2021-02-19 10:11:54 +0000 UTC (0+1 container statuses recorded) Mar 22 01:56:01.682: INFO: Container kindnet-cni ready: true, restart count 0 Mar 22 01:56:01.682: INFO: coredns-74ff55c5b-9rxsk started at 2021-03-22 00:01:47 +0000 UTC (0+1 container statuses recorded) Mar 22 01:56:01.682: INFO: Container coredns ready: true, restart count 0 W0322 01:56:01.687742 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. 
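------------------------------
The 404 at the top of this failure ("the server could not find the requested resource" when creating the suspended CronJob) is consistent with version skew: the suite at this level presumably exercises the batch/v1 CronJob API, while the node info above shows the cluster components at v1.21.0-alpha.0, which most likely still serves CronJob only as batch/v1beta1. A minimal client-go sketch for listing which batch versions the server actually advertises follows; only the kubeconfig path is taken from this log, the rest is an illustrative assumption.

package main

import (
	"fmt"

	"k8s.io/client-go/discovery"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path as printed throughout this run.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	dc, err := discovery.NewDiscoveryClientForConfig(cfg)
	if err != nil {
		panic(err)
	}
	groups, err := dc.ServerGroups()
	if err != nil {
		panic(err)
	}
	// If only batch/v1beta1 is printed, a batch/v1 CronJob create fails with
	// exactly the 404 NotFound recorded above.
	for _, g := range groups.Groups {
		if g.Name == "batch" {
			for _, v := range g.Versions {
				fmt.Println(v.GroupVersion)
			}
		}
	}
}

------------------------------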
Mar 22 01:56:01.794: INFO: Latency metrics for node latest-control-plane Mar 22 01:56:01.794: INFO: Logging node info for node latest-worker Mar 22 01:56:01.798: INFO: Node Info: &Node{ObjectMeta:{latest-worker 52cd6d4b-d53f-435d-801a-04c2822dec44 7017799 0 2021-02-19 10:12:05 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux io.kubernetes.storage.mock/node:some-mock-node kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-worker kubernetes.io/os:linux] map[csi.volume.kubernetes.io/nodeid:{"csi-mock-csi-mock-volumes-11":"csi-mock-csi-mock-volumes-11","csi-mock-csi-mock-volumes-1170":"csi-mock-csi-mock-volumes-1170","csi-mock-csi-mock-volumes-1204":"csi-mock-csi-mock-volumes-1204","csi-mock-csi-mock-volumes-1243":"csi-mock-csi-mock-volumes-1243","csi-mock-csi-mock-volumes-135":"csi-mock-csi-mock-volumes-135","csi-mock-csi-mock-volumes-1517":"csi-mock-csi-mock-volumes-1517","csi-mock-csi-mock-volumes-1561":"csi-mock-csi-mock-volumes-1561","csi-mock-csi-mock-volumes-1595":"csi-mock-csi-mock-volumes-1595","csi-mock-csi-mock-volumes-1714":"csi-mock-csi-mock-volumes-1714","csi-mock-csi-mock-volumes-1732":"csi-mock-csi-mock-volumes-1732","csi-mock-csi-mock-volumes-1876":"csi-mock-csi-mock-volumes-1876","csi-mock-csi-mock-volumes-1945":"csi-mock-csi-mock-volumes-1945","csi-mock-csi-mock-volumes-2041":"csi-mock-csi-mock-volumes-2041","csi-mock-csi-mock-volumes-2057":"csi-mock-csi-mock-volumes-2057","csi-mock-csi-mock-volumes-2208":"csi-mock-csi-mock-volumes-2208","csi-mock-csi-mock-volumes-2302":"csi-mock-csi-mock-volumes-2302","csi-mock-csi-mock-volumes-2459":"csi-mock-csi-mock-volumes-2459","csi-mock-csi-mock-volumes-2511":"csi-mock-csi-mock-volumes-2511","csi-mock-csi-mock-volumes-2706":"csi-mock-csi-mock-volumes-2706","csi-mock-csi-mock-volumes-2710":"csi-mock-csi-mock-volumes-2710","csi-mock-csi-mock-volumes-273":"csi-mock-csi-mock-volumes-273","csi-mock-csi-mock-volumes-2782":"csi-mock-csi-mock-volumes-2782","csi-mock-csi-mock-volumes-2797":"csi-mock-csi-mock-volumes-2797","csi-mock-csi-mock-volumes-2805":"csi-mock-csi-mock-volumes-2805","csi-mock-csi-mock-volumes-2843":"csi-mock-csi-mock-volumes-2843","csi-mock-csi-mock-volumes-3075":"csi-mock-csi-mock-volumes-3075","csi-mock-csi-mock-volumes-3107":"csi-mock-csi-mock-volumes-3107","csi-mock-csi-mock-volumes-3115":"csi-mock-csi-mock-volumes-3115","csi-mock-csi-mock-volumes-3164":"csi-mock-csi-mock-volumes-3164","csi-mock-csi-mock-volumes-3202":"csi-mock-csi-mock-volumes-3202","csi-mock-csi-mock-volumes-3235":"csi-mock-csi-mock-volumes-3235","csi-mock-csi-mock-volumes-3298":"csi-mock-csi-mock-volumes-3298","csi-mock-csi-mock-volumes-3313":"csi-mock-csi-mock-volumes-3313","csi-mock-csi-mock-volumes-3364":"csi-mock-csi-mock-volumes-3364","csi-mock-csi-mock-volumes-3488":"csi-mock-csi-mock-volumes-3488","csi-mock-csi-mock-volumes-3490":"csi-mock-csi-mock-volumes-3490","csi-mock-csi-mock-volumes-3546":"csi-mock-csi-mock-volumes-3546","csi-mock-csi-mock-volumes-3568":"csi-mock-csi-mock-volumes-3568","csi-mock-csi-mock-volumes-3595":"csi-mock-csi-mock-volumes-3595","csi-mock-csi-mock-volumes-3615":"csi-mock-csi-mock-volumes-3615","csi-mock-csi-mock-volumes-3622":"csi-mock-csi-mock-volumes-3622","csi-mock-csi-mock-volumes-3660":"csi-mock-csi-mock-volumes-3660","csi-mock-csi-mock-volumes-3738":"csi-mock-csi-mock-volumes-3738","csi-mock-csi-mock-volumes-380":"csi-mock-csi-mock-volumes-380","csi-mock-csi-mock-volumes-3905":"csi-mock-csi-mock-volumes-3905","csi-mock-csi-mock-volumes-3983":"csi-mock-csi-mock-volumes-3983","csi-mock
-csi-mock-volumes-4658":"csi-mock-csi-mock-volumes-4658","csi-mock-csi-mock-volumes-4689":"csi-mock-csi-mock-volumes-4689","csi-mock-csi-mock-volumes-4839":"csi-mock-csi-mock-volumes-4839","csi-mock-csi-mock-volumes-4871":"csi-mock-csi-mock-volumes-4871","csi-mock-csi-mock-volumes-4885":"csi-mock-csi-mock-volumes-4885","csi-mock-csi-mock-volumes-4888":"csi-mock-csi-mock-volumes-4888","csi-mock-csi-mock-volumes-5028":"csi-mock-csi-mock-volumes-5028","csi-mock-csi-mock-volumes-5118":"csi-mock-csi-mock-volumes-5118","csi-mock-csi-mock-volumes-5120":"csi-mock-csi-mock-volumes-5120","csi-mock-csi-mock-volumes-5160":"csi-mock-csi-mock-volumes-5160","csi-mock-csi-mock-volumes-5164":"csi-mock-csi-mock-volumes-5164","csi-mock-csi-mock-volumes-5225":"csi-mock-csi-mock-volumes-5225","csi-mock-csi-mock-volumes-526":"csi-mock-csi-mock-volumes-526","csi-mock-csi-mock-volumes-5365":"csi-mock-csi-mock-volumes-5365","csi-mock-csi-mock-volumes-5399":"csi-mock-csi-mock-volumes-5399","csi-mock-csi-mock-volumes-5443":"csi-mock-csi-mock-volumes-5443","csi-mock-csi-mock-volumes-5520":"csi-mock-csi-mock-volumes-5520","csi-mock-csi-mock-volumes-5561":"csi-mock-csi-mock-volumes-5561","csi-mock-csi-mock-volumes-5608":"csi-mock-csi-mock-volumes-5608","csi-mock-csi-mock-volumes-5652":"csi-mock-csi-mock-volumes-5652","csi-mock-csi-mock-volumes-5672":"csi-mock-csi-mock-volumes-5672","csi-mock-csi-mock-volumes-569":"csi-mock-csi-mock-volumes-569","csi-mock-csi-mock-volumes-5759":"csi-mock-csi-mock-volumes-5759","csi-mock-csi-mock-volumes-5910":"csi-mock-csi-mock-volumes-5910","csi-mock-csi-mock-volumes-6046":"csi-mock-csi-mock-volumes-6046","csi-mock-csi-mock-volumes-6099":"csi-mock-csi-mock-volumes-6099","csi-mock-csi-mock-volumes-621":"csi-mock-csi-mock-volumes-621","csi-mock-csi-mock-volumes-6347":"csi-mock-csi-mock-volumes-6347","csi-mock-csi-mock-volumes-6447":"csi-mock-csi-mock-volumes-6447","csi-mock-csi-mock-volumes-6752":"csi-mock-csi-mock-volumes-6752","csi-mock-csi-mock-volumes-6763":"csi-mock-csi-mock-volumes-6763","csi-mock-csi-mock-volumes-7184":"csi-mock-csi-mock-volumes-7184","csi-mock-csi-mock-volumes-7244":"csi-mock-csi-mock-volumes-7244","csi-mock-csi-mock-volumes-7259":"csi-mock-csi-mock-volumes-7259","csi-mock-csi-mock-volumes-726":"csi-mock-csi-mock-volumes-726","csi-mock-csi-mock-volumes-7302":"csi-mock-csi-mock-volumes-7302","csi-mock-csi-mock-volumes-7346":"csi-mock-csi-mock-volumes-7346","csi-mock-csi-mock-volumes-7378":"csi-mock-csi-mock-volumes-7378","csi-mock-csi-mock-volumes-7385":"csi-mock-csi-mock-volumes-7385","csi-mock-csi-mock-volumes-746":"csi-mock-csi-mock-volumes-746","csi-mock-csi-mock-volumes-7574":"csi-mock-csi-mock-volumes-7574","csi-mock-csi-mock-volumes-7712":"csi-mock-csi-mock-volumes-7712","csi-mock-csi-mock-volumes-7749":"csi-mock-csi-mock-volumes-7749","csi-mock-csi-mock-volumes-7820":"csi-mock-csi-mock-volumes-7820","csi-mock-csi-mock-volumes-7946":"csi-mock-csi-mock-volumes-7946","csi-mock-csi-mock-volumes-8159":"csi-mock-csi-mock-volumes-8159","csi-mock-csi-mock-volumes-8382":"csi-mock-csi-mock-volumes-8382","csi-mock-csi-mock-volumes-8458":"csi-mock-csi-mock-volumes-8458","csi-mock-csi-mock-volumes-8569":"csi-mock-csi-mock-volumes-8569","csi-mock-csi-mock-volumes-8626":"csi-mock-csi-mock-volumes-8626","csi-mock-csi-mock-volumes-8627":"csi-mock-csi-mock-volumes-8627","csi-mock-csi-mock-volumes-8774":"csi-mock-csi-mock-volumes-8774","csi-mock-csi-mock-volumes-8777":"csi-mock-csi-mock-volumes-8777","csi-mock-csi-mock-volumes-8880":"csi-mock-csi-mock-volumes-8880","csi-mock-
csi-mock-volumes-923":"csi-mock-csi-mock-volumes-923","csi-mock-csi-mock-volumes-9279":"csi-mock-csi-mock-volumes-9279","csi-mock-csi-mock-volumes-9372":"csi-mock-csi-mock-volumes-9372","csi-mock-csi-mock-volumes-9687":"csi-mock-csi-mock-volumes-9687","csi-mock-csi-mock-volumes-969":"csi-mock-csi-mock-volumes-969","csi-mock-csi-mock-volumes-983":"csi-mock-csi-mock-volumes-983","csi-mock-csi-mock-volumes-9858":"csi-mock-csi-mock-volumes-9858","csi-mock-csi-mock-volumes-9983":"csi-mock-csi-mock-volumes-9983","csi-mock-csi-mock-volumes-9992":"csi-mock-csi-mock-volumes-9992","csi-mock-csi-mock-volumes-9995":"csi-mock-csi-mock-volumes-9995"} kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-02-19 10:12:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {kube-controller-manager Update v1 2021-03-22 00:39:31 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}}}}} {kubelet Update v1 2021-03-22 01:53:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:csi.volume.kubernetes.io/nodeid":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:io.kubernetes.storage.mock/node":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:kind://docker/latest/latest-worker,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 
DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-03-22 01:53:41 +0000 UTC,LastTransitionTime:2021-02-19 10:12:05 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-03-22 01:53:41 +0000 UTC,LastTransitionTime:2021-02-19 10:12:05 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-03-22 01:53:41 +0000 UTC,LastTransitionTime:2021-02-19 10:12:05 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-03-22 01:53:41 +0000 UTC,LastTransitionTime:2021-02-19 10:12:25 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.9,},NodeAddress{Type:Hostname,Address:latest-worker,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:9501a3ffa7bf40adaca8f4cb3d1dcc93,SystemUUID:6921bb21-67c9-42ea-b514-81b1daac5968,BootID:b267d78b-f69b-4338-80e8-3f4944338e5d,KernelVersion:4.15.0-118-generic,OSImage:Ubuntu 20.04 LTS,ContainerRuntimeVersion:containerd://1.3.3-14-g449e9269,KubeletVersion:v1.21.0-alpha.0,KubeProxyVersion:v1.21.0-alpha.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/ollivier/clearwater-cassandra@sha256:7460a5e29c91e16bd36534b5284b27832e1c6e0f7c22a4bb79eda79942d250e1 docker.io/ollivier/clearwater-cassandra:hunter],SizeBytes:386500834,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead-prov@sha256:ef3803b4336ea217f9a467dac7f5c4bc3d2cd0f3cbbcc6c101419506fc3a6fa4 docker.io/ollivier/clearwater-homestead-prov:hunter],SizeBytes:360721934,},ContainerImage{Names:[docker.io/ollivier/clearwater-ellis@sha256:abcaf4a9ec74aa746f3b3949f05ee359349fce2b0b3b2deedf67164719cff5dc docker.io/ollivier/clearwater-ellis:hunter],SizeBytes:351519591,},ContainerImage{Names:[docker.io/ollivier/clearwater-homer@sha256:f8738d756137990120f6aafca7e0bc57186ab591a4b51508ba9ddf30688f4de1 docker.io/ollivier/clearwater-homer:hunter],SizeBytes:344304298,},ContainerImage{Names:[docker.io/ollivier/clearwater-astaire@sha256:5bc730435563dc3e31f3a3b7bb58b5899c453929b624052622b3a9a1c2fd04d8 docker.io/ollivier/clearwater-astaire:hunter],SizeBytes:327310970,},ContainerImage{Names:[docker.io/ollivier/clearwater-bono@sha256:0da41ddbf72c158f8e617015450587634cdf6856696005601b2108df92a27254 docker.io/ollivier/clearwater-bono:hunter],SizeBytes:303708624,},ContainerImage{Names:[docker.io/ollivier/clearwater-sprout@sha256:fd91f90bdfbfac30288ca296962e36537f8fa311d34711904e027098227d9f49 docker.io/ollivier/clearwater-sprout:hunter],SizeBytes:298627136,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead@sha256:8467d6c9caefa6a402eba927ee31f8179b2dc3386ac4768dd04f4bd630a1b9e9 docker.io/ollivier/clearwater-homestead:hunter],SizeBytes:295167572,},ContainerImage{Names:[docker.io/ollivier/clearwater-ralf@sha256:7054a741dbd4ea45fa4efd8138c07e21d35fbed627971559de5df2d03883e94f 
docker.io/ollivier/clearwater-ralf:hunter],SizeBytes:287441316,},ContainerImage{Names:[docker.io/ollivier/clearwater-chronos@sha256:1d5f9b5385db632e3076463fec5a4db5d4a15ef21ac5a112cb42fb0551ffa36d docker.io/ollivier/clearwater-chronos:hunter],SizeBytes:285504787,},ContainerImage{Names:[k8s.gcr.io/etcd:3.4.13-0],SizeBytes:254659261,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.21.0-alpha.0],SizeBytes:136866158,},ContainerImage{Names:[docker.io/aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 docker.io/aquasec/kube-hunter:0.3.1],SizeBytes:124684106,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:112029652,},ContainerImage{Names:[docker.io/gluster/glusterdynamic-provisioner@sha256:90067cb05a7d217651e84576935934fd9dcff8e6bcdcbaa416bbf36fcd09dbd1 docker.io/gluster/glusterdynamic-provisioner:v1.0],SizeBytes:111200078,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner@sha256:8bc20b52ce066dd4ea3d9eaac40c04ea8a77f47c33789676580cf4c7c9ea3c3d k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner:v1.0],SizeBytes:111199402,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.21.0-alpha.0],SizeBytes:95511852,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.21.0-alpha.0],SizeBytes:88147262,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:85425365,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.21.0-alpha.0],SizeBytes:66088748,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:49230179,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:a3f7549ea04c419276e8b84e90a515bbce5bc8a057be2ed974ec45492eca346e k8s.gcr.io/e2e-test-images/agnhost:2.26],SizeBytes:49216572,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:4f3eec1e602e1e0b0e5858c8f3d399d0ef536799ec1f7666e025f211001cb706 k8s.gcr.io/e2e-test-images/agnhost:2.28],SizeBytes:49210832,},ContainerImage{Names:[docker.io/bitnami/kubectl@sha256:471a2f06834db208d0e4bef5856550f911357c41b72d0baefa591b3714839067 docker.io/bitnami/kubectl:latest],SizeBytes:48898314,},ContainerImage{Names:[k8s.gcr.io/coredns:1.7.0],SizeBytes:45355487,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:41902332,},ContainerImage{Names:[docker.io/library/httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a docker.io/library/httpd:2.4.39-alpine],SizeBytes:41901429,},ContainerImage{Names:[docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 docker.io/library/httpd:2.4.38-alpine],SizeBytes:40765017,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 
k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:40765006,},ContainerImage{Names:[docker.io/ollivier/clearwater-live-test@sha256:1ec1c46f531b5466e62e2a3f1e65e97336e8ea0ca50f11a4305775d6aeac9a58 docker.io/ollivier/clearwater-live-test:hunter],SizeBytes:39060692,},ContainerImage{Names:[k8s.gcr.io/build-image/debian-iptables@sha256:abe8cef9e116f2d5ec1175c386e33841ff3386779138b425af876384b0fd7ccb k8s.gcr.io/build-image/debian-iptables:buster-v1.5.0],SizeBytes:38088315,},ContainerImage{Names:[k8s.gcr.io/build-image/debian-iptables@sha256:4c9410a4ee555dcb0e8b7bd6fc77c65ac400f7c5bd4555df8187630efaea6ea4 k8s.gcr.io/build-image/debian-iptables:buster-v1.3.0],SizeBytes:37934917,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:ff02aacd9766d597883fabafc7ad604c719a57611db1bcc1564c69a45b000a55 gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.17],SizeBytes:25311280,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:24757245,},ContainerImage{Names:[docker.io/pingcap/chaos-daemon@sha256:2138de599ca94104101fcd8f3bd04401f8c88f2bc721d0f77d55a81b85f4d92f docker.io/pingcap/chaos-daemon:v0.8.0],SizeBytes:21885344,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:20c828075d1e36f679d6a91e905b0927141eef5e15be0c9a1ca4a6a0ed9313d2 k8s.gcr.io/sig-storage/csi-provisioner:v2.1.0],SizeBytes:21205045,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:78e3393f5fd5ff6c1e5dada2478cfa456fb7164929e573cf9a87bf6532730679 k8s.gcr.io/sig-storage/csi-provisioner:v1.6.0],SizeBytes:19408504,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:2ffa647e8107cfd39e5f464e738dce014c9f5e51b108da36c3ab621048d0bbab k8s.gcr.io/sig-storage/csi-attacher:v2.2.0],SizeBytes:18451536,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:6c6a0332693a7c456378f6abd2bb40611826c1e1a733cadbdae2daab3125b71c k8s.gcr.io/sig-storage/csi-resizer:v0.5.0],SizeBytes:18412631,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:4051e85640c22f8e00c34dbd273576fc9e1e2829992656588062be9c0f69b04b k8s.gcr.io/e2e-test-images/nonroot:1.1],SizeBytes:17748448,},ContainerImage{Names:[docker.io/pingcap/chaos-mesh@sha256:b90c04665e0275602e52df37d4b6fdd4621df7d4e4c823fc63356559f67dea72 docker.io/pingcap/chaos-mesh:v0.8.0],SizeBytes:13863887,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e docker.io/coredns/coredns:latest docker.io/coredns/coredns:1.8.0],SizeBytes:12945155,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:4a6e0769130686518325b21b0c1d0688b54e7c79244d48e1b15634e98e40c6ef docker.io/coredns/coredns:1.7.1],SizeBytes:12935701,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 
docker.io/aquasec/kube-bench:0.3.1],SizeBytes:8042926,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:9622c6a6dac7499a055a382930f4de82905a3c5735c0753f7094115c9c871309 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v1.3.0],SizeBytes:7717137,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 22 01:56:01.799: INFO: Logging kubelet events for node latest-worker Mar 22 01:56:01.803: INFO: Logging pods the kubelet thinks is on node latest-worker Mar 22 01:56:01.810: INFO: kube-proxy-5wvjm started at 2021-02-19 10:12:05 +0000 UTC (0+1 container statuses recorded) Mar 22 01:56:01.810: INFO: Container kube-proxy ready: true, restart count 0 Mar 22 01:56:01.810: INFO: kindnet-l4mzm started at 2021-03-22 00:02:51 +0000 UTC (0+1 container statuses recorded) Mar 22 01:56:01.810: INFO: Container kindnet-cni ready: true, restart count 0 Mar 22 01:56:01.810: INFO: chaos-controller-manager-69c479c674-rdmrr started at 2021-03-22 00:02:52 +0000 UTC (0+1 container statuses recorded) Mar 22 01:56:01.810: INFO: Container chaos-mesh ready: true, restart count 0 Mar 22 01:56:01.810: INFO: chaos-daemon-vb9xf started at 2021-03-22 00:02:51 +0000 UTC (0+1 container statuses recorded) Mar 22 01:56:01.810: INFO: Container chaos-daemon ready: true, restart count 0 W0322 01:56:01.815052 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Mar 22 01:56:02.035: INFO: Latency metrics for node latest-worker Mar 22 01:56:02.035: INFO: Logging node info for node latest-worker2 Mar 22 01:56:02.040: INFO: Node Info: &Node{ObjectMeta:{latest-worker2 7d2a1377-0c6f-45fb-899e-6c307ecb1803 7017798 0 2021-02-19 10:12:05 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux io.kubernetes.storage.mock/node:some-mock-node kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-worker2 kubernetes.io/os:linux] 
map[csi.volume.kubernetes.io/nodeid:{"csi-mock-csi-mock-volumes-1005":"csi-mock-csi-mock-volumes-1005","csi-mock-csi-mock-volumes-1125":"csi-mock-csi-mock-volumes-1125","csi-mock-csi-mock-volumes-1158":"csi-mock-csi-mock-volumes-1158","csi-mock-csi-mock-volumes-1167":"csi-mock-csi-mock-volumes-1167","csi-mock-csi-mock-volumes-117":"csi-mock-csi-mock-volumes-117","csi-mock-csi-mock-volumes-123":"csi-mock-csi-mock-volumes-123","csi-mock-csi-mock-volumes-1248":"csi-mock-csi-mock-volumes-1248","csi-mock-csi-mock-volumes-132":"csi-mock-csi-mock-volumes-132","csi-mock-csi-mock-volumes-1462":"csi-mock-csi-mock-volumes-1462","csi-mock-csi-mock-volumes-1508":"csi-mock-csi-mock-volumes-1508","csi-mock-csi-mock-volumes-1620":"csi-mock-csi-mock-volumes-1620","csi-mock-csi-mock-volumes-1694":"csi-mock-csi-mock-volumes-1694","csi-mock-csi-mock-volumes-1823":"csi-mock-csi-mock-volumes-1823","csi-mock-csi-mock-volumes-1829":"csi-mock-csi-mock-volumes-1829","csi-mock-csi-mock-volumes-1852":"csi-mock-csi-mock-volumes-1852","csi-mock-csi-mock-volumes-1856":"csi-mock-csi-mock-volumes-1856","csi-mock-csi-mock-volumes-194":"csi-mock-csi-mock-volumes-194","csi-mock-csi-mock-volumes-1943":"csi-mock-csi-mock-volumes-1943","csi-mock-csi-mock-volumes-2103":"csi-mock-csi-mock-volumes-2103","csi-mock-csi-mock-volumes-2109":"csi-mock-csi-mock-volumes-2109","csi-mock-csi-mock-volumes-2134":"csi-mock-csi-mock-volumes-2134","csi-mock-csi-mock-volumes-2212":"csi-mock-csi-mock-volumes-2212","csi-mock-csi-mock-volumes-2302":"csi-mock-csi-mock-volumes-2302","csi-mock-csi-mock-volumes-2308":"csi-mock-csi-mock-volumes-2308","csi-mock-csi-mock-volumes-2407":"csi-mock-csi-mock-volumes-2407","csi-mock-csi-mock-volumes-2458":"csi-mock-csi-mock-volumes-2458","csi-mock-csi-mock-volumes-2474":"csi-mock-csi-mock-volumes-2474","csi-mock-csi-mock-volumes-254":"csi-mock-csi-mock-volumes-254","csi-mock-csi-mock-volumes-2575":"csi-mock-csi-mock-volumes-2575","csi-mock-csi-mock-volumes-2622":"csi-mock-csi-mock-volumes-2622","csi-mock-csi-mock-volumes-2651":"csi-mock-csi-mock-volumes-2651","csi-mock-csi-mock-volumes-2668":"csi-mock-csi-mock-volumes-2668","csi-mock-csi-mock-volumes-2781":"csi-mock-csi-mock-volumes-2781","csi-mock-csi-mock-volumes-2791":"csi-mock-csi-mock-volumes-2791","csi-mock-csi-mock-volumes-2823":"csi-mock-csi-mock-volumes-2823","csi-mock-csi-mock-volumes-2847":"csi-mock-csi-mock-volumes-2847","csi-mock-csi-mock-volumes-295":"csi-mock-csi-mock-volumes-295","csi-mock-csi-mock-volumes-3129":"csi-mock-csi-mock-volumes-3129","csi-mock-csi-mock-volumes-3190":"csi-mock-csi-mock-volumes-3190","csi-mock-csi-mock-volumes-3428":"csi-mock-csi-mock-volumes-3428","csi-mock-csi-mock-volumes-3490":"csi-mock-csi-mock-volumes-3490","csi-mock-csi-mock-volumes-3491":"csi-mock-csi-mock-volumes-3491","csi-mock-csi-mock-volumes-3532":"csi-mock-csi-mock-volumes-3532","csi-mock-csi-mock-volumes-3609":"csi-mock-csi-mock-volumes-3609","csi-mock-csi-mock-volumes-368":"csi-mock-csi-mock-volumes-368","csi-mock-csi-mock-volumes-3698":"csi-mock-csi-mock-volumes-3698","csi-mock-csi-mock-volumes-3714":"csi-mock-csi-mock-volumes-3714","csi-mock-csi-mock-volumes-3720":"csi-mock-csi-mock-volumes-3720","csi-mock-csi-mock-volumes-3722":"csi-mock-csi-mock-volumes-3722","csi-mock-csi-mock-volumes-3723":"csi-mock-csi-mock-volumes-3723","csi-mock-csi-mock-volumes-3783":"csi-mock-csi-mock-volumes-3783","csi-mock-csi-mock-volumes-3887":"csi-mock-csi-mock-volumes-3887","csi-mock-csi-mock-volumes-3973":"csi-mock-csi-mock-volumes-3973","csi-mock-csi-mock-volumes-4074":"
csi-mock-csi-mock-volumes-4074","csi-mock-csi-mock-volumes-4164":"csi-mock-csi-mock-volumes-4164","csi-mock-csi-mock-volumes-4181":"csi-mock-csi-mock-volumes-4181","csi-mock-csi-mock-volumes-4442":"csi-mock-csi-mock-volumes-4442","csi-mock-csi-mock-volumes-4483":"csi-mock-csi-mock-volumes-4483","csi-mock-csi-mock-volumes-4549":"csi-mock-csi-mock-volumes-4549","csi-mock-csi-mock-volumes-4707":"csi-mock-csi-mock-volumes-4707","csi-mock-csi-mock-volumes-4906":"csi-mock-csi-mock-volumes-4906","csi-mock-csi-mock-volumes-4977":"csi-mock-csi-mock-volumes-4977","csi-mock-csi-mock-volumes-5116":"csi-mock-csi-mock-volumes-5116","csi-mock-csi-mock-volumes-5166":"csi-mock-csi-mock-volumes-5166","csi-mock-csi-mock-volumes-5233":"csi-mock-csi-mock-volumes-5233","csi-mock-csi-mock-volumes-5344":"csi-mock-csi-mock-volumes-5344","csi-mock-csi-mock-volumes-5406":"csi-mock-csi-mock-volumes-5406","csi-mock-csi-mock-volumes-5433":"csi-mock-csi-mock-volumes-5433","csi-mock-csi-mock-volumes-5483":"csi-mock-csi-mock-volumes-5483","csi-mock-csi-mock-volumes-5486":"csi-mock-csi-mock-volumes-5486","csi-mock-csi-mock-volumes-5520":"csi-mock-csi-mock-volumes-5520","csi-mock-csi-mock-volumes-5540":"csi-mock-csi-mock-volumes-5540","csi-mock-csi-mock-volumes-5592":"csi-mock-csi-mock-volumes-5592","csi-mock-csi-mock-volumes-5741":"csi-mock-csi-mock-volumes-5741","csi-mock-csi-mock-volumes-5753":"csi-mock-csi-mock-volumes-5753","csi-mock-csi-mock-volumes-5790":"csi-mock-csi-mock-volumes-5790","csi-mock-csi-mock-volumes-5820":"csi-mock-csi-mock-volumes-5820","csi-mock-csi-mock-volumes-5830":"csi-mock-csi-mock-volumes-5830","csi-mock-csi-mock-volumes-5880":"csi-mock-csi-mock-volumes-5880","csi-mock-csi-mock-volumes-5886":"csi-mock-csi-mock-volumes-5886","csi-mock-csi-mock-volumes-5899":"csi-mock-csi-mock-volumes-5899","csi-mock-csi-mock-volumes-5925":"csi-mock-csi-mock-volumes-5925","csi-mock-csi-mock-volumes-5928":"csi-mock-csi-mock-volumes-5928","csi-mock-csi-mock-volumes-6098":"csi-mock-csi-mock-volumes-6098","csi-mock-csi-mock-volumes-6154":"csi-mock-csi-mock-volumes-6154","csi-mock-csi-mock-volumes-6193":"csi-mock-csi-mock-volumes-6193","csi-mock-csi-mock-volumes-6237":"csi-mock-csi-mock-volumes-6237","csi-mock-csi-mock-volumes-6393":"csi-mock-csi-mock-volumes-6393","csi-mock-csi-mock-volumes-6394":"csi-mock-csi-mock-volumes-6394","csi-mock-csi-mock-volumes-6468":"csi-mock-csi-mock-volumes-6468","csi-mock-csi-mock-volumes-6508":"csi-mock-csi-mock-volumes-6508","csi-mock-csi-mock-volumes-6516":"csi-mock-csi-mock-volumes-6516","csi-mock-csi-mock-volumes-6520":"csi-mock-csi-mock-volumes-6520","csi-mock-csi-mock-volumes-6574":"csi-mock-csi-mock-volumes-6574","csi-mock-csi-mock-volumes-6663":"csi-mock-csi-mock-volumes-6663","csi-mock-csi-mock-volumes-6715":"csi-mock-csi-mock-volumes-6715","csi-mock-csi-mock-volumes-6754":"csi-mock-csi-mock-volumes-6754","csi-mock-csi-mock-volumes-6804":"csi-mock-csi-mock-volumes-6804","csi-mock-csi-mock-volumes-6918":"csi-mock-csi-mock-volumes-6918","csi-mock-csi-mock-volumes-6925":"csi-mock-csi-mock-volumes-6925","csi-mock-csi-mock-volumes-7092":"csi-mock-csi-mock-volumes-7092","csi-mock-csi-mock-volumes-7139":"csi-mock-csi-mock-volumes-7139","csi-mock-csi-mock-volumes-7270":"csi-mock-csi-mock-volumes-7270","csi-mock-csi-mock-volumes-7273":"csi-mock-csi-mock-volumes-7273","csi-mock-csi-mock-volumes-7442":"csi-mock-csi-mock-volumes-7442","csi-mock-csi-mock-volumes-7448":"csi-mock-csi-mock-volumes-7448","csi-mock-csi-mock-volumes-7543":"csi-mock-csi-mock-volumes-7543","csi-mock-csi-mock-volume
s-7597":"csi-mock-csi-mock-volumes-7597","csi-mock-csi-mock-volumes-7608":"csi-mock-csi-mock-volumes-7608","csi-mock-csi-mock-volumes-7642":"csi-mock-csi-mock-volumes-7642","csi-mock-csi-mock-volumes-7659":"csi-mock-csi-mock-volumes-7659","csi-mock-csi-mock-volumes-7725":"csi-mock-csi-mock-volumes-7725","csi-mock-csi-mock-volumes-7760":"csi-mock-csi-mock-volumes-7760","csi-mock-csi-mock-volumes-778":"csi-mock-csi-mock-volumes-778","csi-mock-csi-mock-volumes-7811":"csi-mock-csi-mock-volumes-7811","csi-mock-csi-mock-volumes-7819":"csi-mock-csi-mock-volumes-7819","csi-mock-csi-mock-volumes-791":"csi-mock-csi-mock-volumes-791","csi-mock-csi-mock-volumes-7929":"csi-mock-csi-mock-volumes-7929","csi-mock-csi-mock-volumes-7930":"csi-mock-csi-mock-volumes-7930","csi-mock-csi-mock-volumes-7933":"csi-mock-csi-mock-volumes-7933","csi-mock-csi-mock-volumes-7950":"csi-mock-csi-mock-volumes-7950","csi-mock-csi-mock-volumes-8005":"csi-mock-csi-mock-volumes-8005","csi-mock-csi-mock-volumes-8070":"csi-mock-csi-mock-volumes-8070","csi-mock-csi-mock-volumes-8123":"csi-mock-csi-mock-volumes-8123","csi-mock-csi-mock-volumes-8132":"csi-mock-csi-mock-volumes-8132","csi-mock-csi-mock-volumes-8134":"csi-mock-csi-mock-volumes-8134","csi-mock-csi-mock-volumes-8381":"csi-mock-csi-mock-volumes-8381","csi-mock-csi-mock-volumes-8391":"csi-mock-csi-mock-volumes-8391","csi-mock-csi-mock-volumes-8409":"csi-mock-csi-mock-volumes-8409","csi-mock-csi-mock-volumes-8420":"csi-mock-csi-mock-volumes-8420","csi-mock-csi-mock-volumes-8467":"csi-mock-csi-mock-volumes-8467","csi-mock-csi-mock-volumes-8581":"csi-mock-csi-mock-volumes-8581","csi-mock-csi-mock-volumes-8619":"csi-mock-csi-mock-volumes-8619","csi-mock-csi-mock-volumes-8708":"csi-mock-csi-mock-volumes-8708","csi-mock-csi-mock-volumes-8766":"csi-mock-csi-mock-volumes-8766","csi-mock-csi-mock-volumes-8789":"csi-mock-csi-mock-volumes-8789","csi-mock-csi-mock-volumes-8800":"csi-mock-csi-mock-volumes-8800","csi-mock-csi-mock-volumes-8819":"csi-mock-csi-mock-volumes-8819","csi-mock-csi-mock-volumes-8830":"csi-mock-csi-mock-volumes-8830","csi-mock-csi-mock-volumes-9034":"csi-mock-csi-mock-volumes-9034","csi-mock-csi-mock-volumes-9051":"csi-mock-csi-mock-volumes-9051","csi-mock-csi-mock-volumes-9052":"csi-mock-csi-mock-volumes-9052","csi-mock-csi-mock-volumes-9211":"csi-mock-csi-mock-volumes-9211","csi-mock-csi-mock-volumes-9234":"csi-mock-csi-mock-volumes-9234","csi-mock-csi-mock-volumes-9238":"csi-mock-csi-mock-volumes-9238","csi-mock-csi-mock-volumes-9316":"csi-mock-csi-mock-volumes-9316","csi-mock-csi-mock-volumes-9344":"csi-mock-csi-mock-volumes-9344","csi-mock-csi-mock-volumes-9386":"csi-mock-csi-mock-volumes-9386","csi-mock-csi-mock-volumes-941":"csi-mock-csi-mock-volumes-941","csi-mock-csi-mock-volumes-9416":"csi-mock-csi-mock-volumes-9416","csi-mock-csi-mock-volumes-9426":"csi-mock-csi-mock-volumes-9426","csi-mock-csi-mock-volumes-9429":"csi-mock-csi-mock-volumes-9429","csi-mock-csi-mock-volumes-9438":"csi-mock-csi-mock-volumes-9438","csi-mock-csi-mock-volumes-9439":"csi-mock-csi-mock-volumes-9439","csi-mock-csi-mock-volumes-9665":"csi-mock-csi-mock-volumes-9665","csi-mock-csi-mock-volumes-9772":"csi-mock-csi-mock-volumes-9772","csi-mock-csi-mock-volumes-9793":"csi-mock-csi-mock-volumes-9793","csi-mock-csi-mock-volumes-9921":"csi-mock-csi-mock-volumes-9921","csi-mock-csi-mock-volumes-9953":"csi-mock-csi-mock-volumes-9953"} kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 
volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-02-19 10:12:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {kube-controller-manager Update v1 2021-03-22 00:34:44 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}} {e2e.test Update v1 2021-03-22 01:44:00 +0000 UTC FieldsV1 {"f:status":{"f:capacity":{"f:example.com/fakecpu":{}}}}} {kubelet Update v1 2021-03-22 01:53:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:csi.volume.kubernetes.io/nodeid":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:io.kubernetes.storage.mock/node":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:example.com/fakecpu":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:kind://docker/latest/latest-worker2,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},example.com/fakecpu: {{1 3} {} 1k DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},example.com/fakecpu: {{1 3} {} 1k DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-03-22 01:53:41 +0000 UTC,LastTransitionTime:2021-02-19 10:12:05 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-03-22 01:53:41 +0000 
UTC,LastTransitionTime:2021-02-19 10:12:05 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-03-22 01:53:41 +0000 UTC,LastTransitionTime:2021-02-19 10:12:05 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-03-22 01:53:41 +0000 UTC,LastTransitionTime:2021-02-19 10:12:35 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.13,},NodeAddress{Type:Hostname,Address:latest-worker2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:c99819c175ab4bf18f91789643e4cec7,SystemUUID:67d3f1bb-f61c-4599-98ac-5879bee4ddde,BootID:b267d78b-f69b-4338-80e8-3f4944338e5d,KernelVersion:4.15.0-118-generic,OSImage:Ubuntu 20.04 LTS,ContainerRuntimeVersion:containerd://1.3.3-14-g449e9269,KubeletVersion:v1.21.0-alpha.0,KubeProxyVersion:v1.21.0-alpha.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/ollivier/clearwater-cassandra@sha256:7460a5e29c91e16bd36534b5284b27832e1c6e0f7c22a4bb79eda79942d250e1 docker.io/ollivier/clearwater-cassandra:hunter],SizeBytes:386500834,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead-prov@sha256:ef3803b4336ea217f9a467dac7f5c4bc3d2cd0f3cbbcc6c101419506fc3a6fa4 docker.io/ollivier/clearwater-homestead-prov:hunter],SizeBytes:360721934,},ContainerImage{Names:[docker.io/ollivier/clearwater-ellis@sha256:abcaf4a9ec74aa746f3b3949f05ee359349fce2b0b3b2deedf67164719cff5dc docker.io/ollivier/clearwater-ellis:hunter],SizeBytes:351519591,},ContainerImage{Names:[docker.io/ollivier/clearwater-homer@sha256:f8738d756137990120f6aafca7e0bc57186ab591a4b51508ba9ddf30688f4de1 docker.io/ollivier/clearwater-homer:hunter],SizeBytes:344304298,},ContainerImage{Names:[docker.io/ollivier/clearwater-astaire@sha256:5bc730435563dc3e31f3a3b7bb58b5899c453929b624052622b3a9a1c2fd04d8 docker.io/ollivier/clearwater-astaire:hunter],SizeBytes:327310970,},ContainerImage{Names:[docker.io/ollivier/clearwater-bono@sha256:0da41ddbf72c158f8e617015450587634cdf6856696005601b2108df92a27254 docker.io/ollivier/clearwater-bono:hunter],SizeBytes:303708624,},ContainerImage{Names:[docker.io/ollivier/clearwater-sprout@sha256:fd91f90bdfbfac30288ca296962e36537f8fa311d34711904e027098227d9f49 docker.io/ollivier/clearwater-sprout:hunter],SizeBytes:298627136,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead@sha256:8467d6c9caefa6a402eba927ee31f8179b2dc3386ac4768dd04f4bd630a1b9e9 docker.io/ollivier/clearwater-homestead:hunter],SizeBytes:295167572,},ContainerImage{Names:[docker.io/ollivier/clearwater-ralf@sha256:7054a741dbd4ea45fa4efd8138c07e21d35fbed627971559de5df2d03883e94f docker.io/ollivier/clearwater-ralf:hunter],SizeBytes:287441316,},ContainerImage{Names:[docker.io/ollivier/clearwater-chronos@sha256:1d5f9b5385db632e3076463fec5a4db5d4a15ef21ac5a112cb42fb0551ffa36d docker.io/ollivier/clearwater-chronos:hunter],SizeBytes:285504787,},ContainerImage{Names:[k8s.gcr.io/etcd:3.4.13-0],SizeBytes:254659261,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.21.0-alpha.0],SizeBytes:136866158,},ContainerImage{Names:[docker.io/aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 
docker.io/aquasec/kube-hunter:0.3.1],SizeBytes:124684106,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:112029652,},ContainerImage{Names:[docker.io/gluster/glusterdynamic-provisioner@sha256:90067cb05a7d217651e84576935934fd9dcff8e6bcdcbaa416bbf36fcd09dbd1 docker.io/gluster/glusterdynamic-provisioner:v1.0],SizeBytes:111200078,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner@sha256:8bc20b52ce066dd4ea3d9eaac40c04ea8a77f47c33789676580cf4c7c9ea3c3d k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner:v1.0],SizeBytes:111199402,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.21.0-alpha.0],SizeBytes:95511852,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.21.0-alpha.0],SizeBytes:88147262,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:85425365,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.21.0-alpha.0],SizeBytes:66088748,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:49230179,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:a3f7549ea04c419276e8b84e90a515bbce5bc8a057be2ed974ec45492eca346e k8s.gcr.io/e2e-test-images/agnhost:2.26],SizeBytes:49216572,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:4f3eec1e602e1e0b0e5858c8f3d399d0ef536799ec1f7666e025f211001cb706 k8s.gcr.io/e2e-test-images/agnhost:2.28],SizeBytes:49210832,},ContainerImage{Names:[k8s.gcr.io/coredns:1.7.0],SizeBytes:45355487,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:41902332,},ContainerImage{Names:[docker.io/library/httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a docker.io/library/httpd:2.4.39-alpine],SizeBytes:41901429,},ContainerImage{Names:[docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 docker.io/library/httpd:2.4.38-alpine],SizeBytes:40765017,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:40765006,},ContainerImage{Names:[docker.io/ollivier/clearwater-live-test@sha256:1ec1c46f531b5466e62e2a3f1e65e97336e8ea0ca50f11a4305775d6aeac9a58 docker.io/ollivier/clearwater-live-test:hunter],SizeBytes:39060692,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:ff02aacd9766d597883fabafc7ad604c719a57611db1bcc1564c69a45b000a55 gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.17],SizeBytes:25311280,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 
k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:24757245,},ContainerImage{Names:[docker.io/pingcap/chaos-daemon@sha256:2138de599ca94104101fcd8f3bd04401f8c88f2bc721d0f77d55a81b85f4d92f docker.io/pingcap/chaos-daemon:v0.8.0],SizeBytes:21885344,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:20c828075d1e36f679d6a91e905b0927141eef5e15be0c9a1ca4a6a0ed9313d2 k8s.gcr.io/sig-storage/csi-provisioner:v2.1.0],SizeBytes:21205045,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:78e3393f5fd5ff6c1e5dada2478cfa456fb7164929e573cf9a87bf6532730679 k8s.gcr.io/sig-storage/csi-provisioner:v1.6.0],SizeBytes:19408504,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/regression-issue-74839@sha256:b4f1d8d61bdad84bd50442d161d5460e4019d53e989b64220fdbc62fc87d76bf k8s.gcr.io/e2e-test-images/regression-issue-74839:1.2],SizeBytes:18651485,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:2ffa647e8107cfd39e5f464e738dce014c9f5e51b108da36c3ab621048d0bbab k8s.gcr.io/sig-storage/csi-attacher:v2.2.0],SizeBytes:18451536,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:6c6a0332693a7c456378f6abd2bb40611826c1e1a733cadbdae2daab3125b71c k8s.gcr.io/sig-storage/csi-resizer:v0.5.0],SizeBytes:18412631,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:4051e85640c22f8e00c34dbd273576fc9e1e2829992656588062be9c0f69b04b k8s.gcr.io/e2e-test-images/nonroot:1.1],SizeBytes:17748448,},ContainerImage{Names:[docker.io/pingcap/chaos-mesh@sha256:b90c04665e0275602e52df37d4b6fdd4621df7d4e4c823fc63356559f67dea72 docker.io/pingcap/chaos-mesh:v0.8.0],SizeBytes:13863887,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e docker.io/coredns/coredns:1.8.0],SizeBytes:12945155,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:4a6e0769130686518325b21b0c1d0688b54e7c79244d48e1b15634e98e40c6ef docker.io/coredns/coredns:1.7.1],SizeBytes:12935701,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:642ff9910da6ea9a8624b0234eef52af9ca75ecbec474c5507cb096bdfbae4e5 docker.io/coredns/coredns:latest],SizeBytes:12893350,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 docker.io/aquasec/kube-bench:0.3.1],SizeBytes:8042926,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:9622c6a6dac7499a055a382930f4de82905a3c5735c0753f7094115c9c871309 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v1.3.0],SizeBytes:7717137,},ContainerImage{Names:[quay.io/coreos/etcd:v2.2.5],SizeBytes:7670543,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 22 01:56:02.041: INFO: Logging kubelet events for node latest-worker2 Mar 22 01:56:02.044: INFO: Logging pods the kubelet thinks are on node latest-worker2 Mar 22 01:56:02.051: INFO: kube-proxy-7q92q started at 2021-02-19 10:12:05 +0000 UTC (0+1 container statuses recorded) Mar 22 01:56:02.051: INFO: Container kube-proxy ready: true, restart count 0 Mar 22 01:56:02.051: INFO: kindnet-7qb7q started at 2021-03-22 00:02:52 +0000 UTC (0+1
container statuses recorded) Mar 22 01:56:02.051: INFO: Container kindnet-cni ready: true, restart count 0 Mar 22 01:56:02.051: INFO: chaos-daemon-4zjcg started at 2021-03-22 00:02:52 +0000 UTC (0+1 container statuses recorded) Mar 22 01:56:02.051: INFO: Container chaos-daemon ready: true, restart count 0 W0322 01:56:02.076407 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Mar 22 01:56:02.299: INFO: Latency metrics for node latest-worker2 Mar 22 01:56:02.299: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "cronjob-5972" for this suite. • Failure [0.801 seconds] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should not schedule jobs when suspended [Slow] [Conformance] [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Mar 22 01:56:01.635: Failed to create CronJob in namespace cronjob-5972 Unexpected error: <*errors.StatusError | 0xc00367ba40>: { ErrStatus: { TypeMeta: {Kind: "", APIVersion: ""}, ListMeta: { SelfLink: "", ResourceVersion: "", Continue: "", RemainingItemCount: nil, }, Status: "Failure", Message: "the server could not find the requested resource", Reason: "NotFound", Details: {Name: "", Group: "", Kind: "", UID: "", Causes: nil, RetryAfterSeconds: 0}, Code: 404, }, } the server could not find the requested resource occurred /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:106 ------------------------------ {"msg":"FAILED [sig-apps] CronJob should not schedule jobs when suspended [Slow] [Conformance]","total":330,"completed":265,"skipped":4725,"failed":18,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-apps] CronJob should not schedule jobs when suspended [Slow] [Conformance]"]} SSSSSSSSSSSSSS ------------------------------
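The 404 recorded in the Failure block above ("the server could not find the requested resource" while creating a CronJob) is the usual signature of a client requesting an API group/version the apiserver does not serve; for CronJob that typically means a batch/v1 request against a server that only serves batch/v1beta1. The discovery probes below are a minimal sketch for confirming this against the same kubeconfig; the probe object and all of its names are illustrative, not taken from this run.

    # Which versions of the batch group does this server actually serve?
    kubectl get --raw /apis/batch
    kubectl api-resources --api-group=batch

    # Server-side dry-run of a batch/v1 CronJob. Against a server without
    # batch/v1, kubectl reports 'no matches for kind "CronJob" in version
    # "batch/v1"' -- the discovery-level view of the 404 the test binary got
    # when it POSTed the object directly.
    kubectl create --dry-run=server -f - <<'EOF'
    apiVersion: batch/v1
    kind: CronJob
    metadata:
      name: probe-cronjob
    spec:
      schedule: "*/1 * * * *"
      jobTemplate:
        spec:
          template:
            spec:
              restartPolicy: OnFailure
              containers:
              - name: probe
                image: k8s.gcr.io/e2e-test-images/agnhost:2.28
                args: ["pause"]
    EOF
------------------------------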
[sig-apps] ReplicationController should adopt matching pods on creation [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 01:56:02.309: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54 [It] should adopt matching pods on creation [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Given a Pod with a 'name' label pod-adoption is created Mar 22 01:56:02.406: INFO: The status of Pod pod-adoption is Pending, waiting for it to be Running (with Ready = true) Mar 22 01:56:04.413: INFO: The status of Pod pod-adoption is Pending, waiting for it to be Running (with Ready = true) Mar 22 01:56:06.412: INFO: The status of Pod pod-adoption is Running (Ready = true) STEP: When a replication controller with a matching selector is created STEP: Then the orphan pod is adopted [AfterEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 01:56:07.437: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-2709" for this suite.
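The adoption sequence just logged (a bare pod carrying a 'name' label, then a replication controller whose selector matches it, after which the orphan gains an owner rather than a sibling) can be reproduced with a session like the sketch below; the image and object names are illustrative, not the exact fixture the suite generates.

    # An orphan pod carrying the label the controller will select on.
    kubectl run pod-adoption --image=k8s.gcr.io/e2e-test-images/agnhost:2.28 \
      --labels=name=pod-adoption --restart=Never -- pause

    # A replication controller whose selector matches that label; instead of
    # creating a second replica it adopts the existing pod.
    kubectl create -f - <<'EOF'
    apiVersion: v1
    kind: ReplicationController
    metadata:
      name: pod-adoption
    spec:
      replicas: 1
      selector:
        name: pod-adoption
      template:
        metadata:
          labels:
            name: pod-adoption
        spec:
          containers:
          - name: agnhost
            image: k8s.gcr.io/e2e-test-images/agnhost:2.28
            args: ["pause"]
    EOF

    # Adoption is visible as an ownerReference on the formerly orphaned pod.
    kubectl get pod pod-adoption \
      -o jsonpath='{.metadata.ownerReferences[0].kind}/{.metadata.ownerReferences[0].name}{"\n"}'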
• [SLOW TEST:5.138 seconds] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":330,"completed":266,"skipped":4739,"failed":18,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-apps] CronJob should not schedule jobs when suspended [Slow] [Conformance]"]} SSSSSSSSSSSSS ------------------------------ [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 01:56:07.447: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a job STEP: Ensuring job reaches completions [AfterEach] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 01:56:23.519: INFO: Waiting up to 3m0s for all (but 0) nodes to 
be ready STEP: Destroying namespace "job-6338" for this suite. • [SLOW TEST:16.082 seconds] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":330,"completed":267,"skipped":4752,"failed":18,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-apps] CronJob should not schedule jobs when suspended [Slow] [Conformance]"]} SSSSS ------------------------------ [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 01:56:23.530: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n 
"$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-5845 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-5845;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-5845 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-5845;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-5845.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-5845.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-5845.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-5845.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-5845.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-5845.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-5845.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-5845.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-5845.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-5845.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-5845.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-5845.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5845.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 37.59.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.59.37_udp@PTR;check="$$(dig +tcp +noall +answer +search 37.59.96.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.96.59.37_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-5845 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-5845;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-5845 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-5845;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-5845.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-5845.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-5845.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-5845.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-5845.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-5845.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-5845.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-5845.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-5845.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-5845.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-5845.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-5845.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5845.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 37.59.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.59.37_udp@PTR;check="$$(dig +tcp +noall +answer +search 37.59.96.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.96.59.37_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 22 01:56:31.879: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-5845/dns-test-db80272a-06f7-4b9a-ba37-e321716dc9a7: the server could not find the requested resource (get pods dns-test-db80272a-06f7-4b9a-ba37-e321716dc9a7) Mar 22 01:56:31.882: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-5845/dns-test-db80272a-06f7-4b9a-ba37-e321716dc9a7: the server could not find the requested resource (get pods dns-test-db80272a-06f7-4b9a-ba37-e321716dc9a7) Mar 22 01:56:31.885: INFO: Unable to read wheezy_udp@dns-test-service.dns-5845 from pod dns-5845/dns-test-db80272a-06f7-4b9a-ba37-e321716dc9a7: the server could not find the requested resource (get pods dns-test-db80272a-06f7-4b9a-ba37-e321716dc9a7) Mar 22 01:56:31.889: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5845 from pod dns-5845/dns-test-db80272a-06f7-4b9a-ba37-e321716dc9a7: the server could not find the requested resource (get pods dns-test-db80272a-06f7-4b9a-ba37-e321716dc9a7) Mar 22 01:56:31.892: INFO: Unable to read wheezy_udp@dns-test-service.dns-5845.svc from pod dns-5845/dns-test-db80272a-06f7-4b9a-ba37-e321716dc9a7: the server could not find the requested resource (get pods dns-test-db80272a-06f7-4b9a-ba37-e321716dc9a7) Mar 22 01:56:31.896: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5845.svc from pod dns-5845/dns-test-db80272a-06f7-4b9a-ba37-e321716dc9a7: the server could not find the requested resource (get pods dns-test-db80272a-06f7-4b9a-ba37-e321716dc9a7) Mar 22 01:56:31.913: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5845.svc from pod dns-5845/dns-test-db80272a-06f7-4b9a-ba37-e321716dc9a7: the server could not find the requested resource (get pods dns-test-db80272a-06f7-4b9a-ba37-e321716dc9a7) Mar 22 01:56:31.916: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5845.svc from pod dns-5845/dns-test-db80272a-06f7-4b9a-ba37-e321716dc9a7: the server could not find the requested resource (get pods dns-test-db80272a-06f7-4b9a-ba37-e321716dc9a7) Mar 22 01:56:31.938: INFO: Unable to read jessie_udp@dns-test-service from pod dns-5845/dns-test-db80272a-06f7-4b9a-ba37-e321716dc9a7: the server could not find the requested resource (get pods dns-test-db80272a-06f7-4b9a-ba37-e321716dc9a7) Mar 22 01:56:31.941: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-5845/dns-test-db80272a-06f7-4b9a-ba37-e321716dc9a7: the server could not find the requested resource (get pods dns-test-db80272a-06f7-4b9a-ba37-e321716dc9a7) Mar 22 01:56:31.944: INFO: Unable to read jessie_udp@dns-test-service.dns-5845 from pod dns-5845/dns-test-db80272a-06f7-4b9a-ba37-e321716dc9a7: the server could not find the requested resource (get pods dns-test-db80272a-06f7-4b9a-ba37-e321716dc9a7) Mar 22 01:56:31.947: INFO: Unable to read jessie_tcp@dns-test-service.dns-5845 from pod dns-5845/dns-test-db80272a-06f7-4b9a-ba37-e321716dc9a7: the server could not find the requested resource (get pods dns-test-db80272a-06f7-4b9a-ba37-e321716dc9a7) Mar 22 01:56:31.949: INFO: Unable to read jessie_udp@dns-test-service.dns-5845.svc from pod dns-5845/dns-test-db80272a-06f7-4b9a-ba37-e321716dc9a7: the server could not find the requested resource (get pods dns-test-db80272a-06f7-4b9a-ba37-e321716dc9a7) Mar 22 01:56:31.952: INFO: Unable to read 
jessie_tcp@dns-test-service.dns-5845.svc from pod dns-5845/dns-test-db80272a-06f7-4b9a-ba37-e321716dc9a7: the server could not find the requested resource (get pods dns-test-db80272a-06f7-4b9a-ba37-e321716dc9a7) Mar 22 01:56:31.954: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5845.svc from pod dns-5845/dns-test-db80272a-06f7-4b9a-ba37-e321716dc9a7: the server could not find the requested resource (get pods dns-test-db80272a-06f7-4b9a-ba37-e321716dc9a7) Mar 22 01:56:31.957: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5845.svc from pod dns-5845/dns-test-db80272a-06f7-4b9a-ba37-e321716dc9a7: the server could not find the requested resource (get pods dns-test-db80272a-06f7-4b9a-ba37-e321716dc9a7) Mar 22 01:56:31.974: INFO: Lookups using dns-5845/dns-test-db80272a-06f7-4b9a-ba37-e321716dc9a7 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-5845 wheezy_tcp@dns-test-service.dns-5845 wheezy_udp@dns-test-service.dns-5845.svc wheezy_tcp@dns-test-service.dns-5845.svc wheezy_udp@_http._tcp.dns-test-service.dns-5845.svc wheezy_tcp@_http._tcp.dns-test-service.dns-5845.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-5845 jessie_tcp@dns-test-service.dns-5845 jessie_udp@dns-test-service.dns-5845.svc jessie_tcp@dns-test-service.dns-5845.svc jessie_udp@_http._tcp.dns-test-service.dns-5845.svc jessie_tcp@_http._tcp.dns-test-service.dns-5845.svc] Mar 22 01:56:36.978: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-5845/dns-test-db80272a-06f7-4b9a-ba37-e321716dc9a7: the server could not find the requested resource (get pods dns-test-db80272a-06f7-4b9a-ba37-e321716dc9a7) Mar 22 01:56:36.981: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-5845/dns-test-db80272a-06f7-4b9a-ba37-e321716dc9a7: the server could not find the requested resource (get pods dns-test-db80272a-06f7-4b9a-ba37-e321716dc9a7) Mar 22 01:56:36.984: INFO: Unable to read wheezy_udp@dns-test-service.dns-5845 from pod dns-5845/dns-test-db80272a-06f7-4b9a-ba37-e321716dc9a7: the server could not find the requested resource (get pods dns-test-db80272a-06f7-4b9a-ba37-e321716dc9a7) Mar 22 01:56:36.987: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5845 from pod dns-5845/dns-test-db80272a-06f7-4b9a-ba37-e321716dc9a7: the server could not find the requested resource (get pods dns-test-db80272a-06f7-4b9a-ba37-e321716dc9a7) Mar 22 01:56:36.990: INFO: Unable to read wheezy_udp@dns-test-service.dns-5845.svc from pod dns-5845/dns-test-db80272a-06f7-4b9a-ba37-e321716dc9a7: the server could not find the requested resource (get pods dns-test-db80272a-06f7-4b9a-ba37-e321716dc9a7) Mar 22 01:56:36.992: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5845.svc from pod dns-5845/dns-test-db80272a-06f7-4b9a-ba37-e321716dc9a7: the server could not find the requested resource (get pods dns-test-db80272a-06f7-4b9a-ba37-e321716dc9a7) Mar 22 01:56:36.995: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5845.svc from pod dns-5845/dns-test-db80272a-06f7-4b9a-ba37-e321716dc9a7: the server could not find the requested resource (get pods dns-test-db80272a-06f7-4b9a-ba37-e321716dc9a7) Mar 22 01:56:36.998: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5845.svc from pod dns-5845/dns-test-db80272a-06f7-4b9a-ba37-e321716dc9a7: the server could not find the requested resource (get pods dns-test-db80272a-06f7-4b9a-ba37-e321716dc9a7) Mar 22 01:56:37.017: INFO: Unable to read 
jessie_udp@dns-test-service from pod dns-5845/dns-test-db80272a-06f7-4b9a-ba37-e321716dc9a7: the server could not find the requested resource (get pods dns-test-db80272a-06f7-4b9a-ba37-e321716dc9a7) Mar 22 01:56:37.019: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-5845/dns-test-db80272a-06f7-4b9a-ba37-e321716dc9a7: the server could not find the requested resource (get pods dns-test-db80272a-06f7-4b9a-ba37-e321716dc9a7) Mar 22 01:56:37.023: INFO: Unable to read jessie_udp@dns-test-service.dns-5845 from pod dns-5845/dns-test-db80272a-06f7-4b9a-ba37-e321716dc9a7: the server could not find the requested resource (get pods dns-test-db80272a-06f7-4b9a-ba37-e321716dc9a7) Mar 22 01:56:37.026: INFO: Unable to read jessie_tcp@dns-test-service.dns-5845 from pod dns-5845/dns-test-db80272a-06f7-4b9a-ba37-e321716dc9a7: the server could not find the requested resource (get pods dns-test-db80272a-06f7-4b9a-ba37-e321716dc9a7) Mar 22 01:56:37.029: INFO: Unable to read jessie_udp@dns-test-service.dns-5845.svc from pod dns-5845/dns-test-db80272a-06f7-4b9a-ba37-e321716dc9a7: the server could not find the requested resource (get pods dns-test-db80272a-06f7-4b9a-ba37-e321716dc9a7) Mar 22 01:56:37.032: INFO: Unable to read jessie_tcp@dns-test-service.dns-5845.svc from pod dns-5845/dns-test-db80272a-06f7-4b9a-ba37-e321716dc9a7: the server could not find the requested resource (get pods dns-test-db80272a-06f7-4b9a-ba37-e321716dc9a7) Mar 22 01:56:37.035: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5845.svc from pod dns-5845/dns-test-db80272a-06f7-4b9a-ba37-e321716dc9a7: the server could not find the requested resource (get pods dns-test-db80272a-06f7-4b9a-ba37-e321716dc9a7) Mar 22 01:56:37.038: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5845.svc from pod dns-5845/dns-test-db80272a-06f7-4b9a-ba37-e321716dc9a7: the server could not find the requested resource (get pods dns-test-db80272a-06f7-4b9a-ba37-e321716dc9a7) Mar 22 01:56:37.056: INFO: Lookups using dns-5845/dns-test-db80272a-06f7-4b9a-ba37-e321716dc9a7 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-5845 wheezy_tcp@dns-test-service.dns-5845 wheezy_udp@dns-test-service.dns-5845.svc wheezy_tcp@dns-test-service.dns-5845.svc wheezy_udp@_http._tcp.dns-test-service.dns-5845.svc wheezy_tcp@_http._tcp.dns-test-service.dns-5845.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-5845 jessie_tcp@dns-test-service.dns-5845 jessie_udp@dns-test-service.dns-5845.svc jessie_tcp@dns-test-service.dns-5845.svc jessie_udp@_http._tcp.dns-test-service.dns-5845.svc jessie_tcp@_http._tcp.dns-test-service.dns-5845.svc] Mar 22 01:56:41.979: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-5845/dns-test-db80272a-06f7-4b9a-ba37-e321716dc9a7: the server could not find the requested resource (get pods dns-test-db80272a-06f7-4b9a-ba37-e321716dc9a7) Mar 22 01:56:41.983: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-5845/dns-test-db80272a-06f7-4b9a-ba37-e321716dc9a7: the server could not find the requested resource (get pods dns-test-db80272a-06f7-4b9a-ba37-e321716dc9a7) Mar 22 01:56:41.986: INFO: Unable to read wheezy_udp@dns-test-service.dns-5845 from pod dns-5845/dns-test-db80272a-06f7-4b9a-ba37-e321716dc9a7: the server could not find the requested resource (get pods dns-test-db80272a-06f7-4b9a-ba37-e321716dc9a7) Mar 22 01:56:42.008: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5845 from pod 
dns-5845/dns-test-db80272a-06f7-4b9a-ba37-e321716dc9a7: the server could not find the requested resource (get pods dns-test-db80272a-06f7-4b9a-ba37-e321716dc9a7) Mar 22 01:56:42.012: INFO: Unable to read wheezy_udp@dns-test-service.dns-5845.svc from pod dns-5845/dns-test-db80272a-06f7-4b9a-ba37-e321716dc9a7: the server could not find the requested resource (get pods dns-test-db80272a-06f7-4b9a-ba37-e321716dc9a7) Mar 22 01:56:42.014: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5845.svc from pod dns-5845/dns-test-db80272a-06f7-4b9a-ba37-e321716dc9a7: the server could not find the requested resource (get pods dns-test-db80272a-06f7-4b9a-ba37-e321716dc9a7) Mar 22 01:56:42.017: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5845.svc from pod dns-5845/dns-test-db80272a-06f7-4b9a-ba37-e321716dc9a7: the server could not find the requested resource (get pods dns-test-db80272a-06f7-4b9a-ba37-e321716dc9a7) Mar 22 01:56:42.020: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5845.svc from pod dns-5845/dns-test-db80272a-06f7-4b9a-ba37-e321716dc9a7: the server could not find the requested resource (get pods dns-test-db80272a-06f7-4b9a-ba37-e321716dc9a7) Mar 22 01:56:42.039: INFO: Unable to read jessie_udp@dns-test-service from pod dns-5845/dns-test-db80272a-06f7-4b9a-ba37-e321716dc9a7: the server could not find the requested resource (get pods dns-test-db80272a-06f7-4b9a-ba37-e321716dc9a7) Mar 22 01:56:42.042: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-5845/dns-test-db80272a-06f7-4b9a-ba37-e321716dc9a7: the server could not find the requested resource (get pods dns-test-db80272a-06f7-4b9a-ba37-e321716dc9a7) Mar 22 01:56:42.046: INFO: Unable to read jessie_udp@dns-test-service.dns-5845 from pod dns-5845/dns-test-db80272a-06f7-4b9a-ba37-e321716dc9a7: the server could not find the requested resource (get pods dns-test-db80272a-06f7-4b9a-ba37-e321716dc9a7) Mar 22 01:56:42.049: INFO: Unable to read jessie_tcp@dns-test-service.dns-5845 from pod dns-5845/dns-test-db80272a-06f7-4b9a-ba37-e321716dc9a7: the server could not find the requested resource (get pods dns-test-db80272a-06f7-4b9a-ba37-e321716dc9a7) Mar 22 01:56:42.052: INFO: Unable to read jessie_udp@dns-test-service.dns-5845.svc from pod dns-5845/dns-test-db80272a-06f7-4b9a-ba37-e321716dc9a7: the server could not find the requested resource (get pods dns-test-db80272a-06f7-4b9a-ba37-e321716dc9a7) Mar 22 01:56:42.054: INFO: Unable to read jessie_tcp@dns-test-service.dns-5845.svc from pod dns-5845/dns-test-db80272a-06f7-4b9a-ba37-e321716dc9a7: the server could not find the requested resource (get pods dns-test-db80272a-06f7-4b9a-ba37-e321716dc9a7) Mar 22 01:56:42.058: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5845.svc from pod dns-5845/dns-test-db80272a-06f7-4b9a-ba37-e321716dc9a7: the server could not find the requested resource (get pods dns-test-db80272a-06f7-4b9a-ba37-e321716dc9a7) Mar 22 01:56:42.061: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5845.svc from pod dns-5845/dns-test-db80272a-06f7-4b9a-ba37-e321716dc9a7: the server could not find the requested resource (get pods dns-test-db80272a-06f7-4b9a-ba37-e321716dc9a7) Mar 22 01:56:42.078: INFO: Lookups using dns-5845/dns-test-db80272a-06f7-4b9a-ba37-e321716dc9a7 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-5845 wheezy_tcp@dns-test-service.dns-5845 wheezy_udp@dns-test-service.dns-5845.svc wheezy_tcp@dns-test-service.dns-5845.svc 
wheezy_udp@_http._tcp.dns-test-service.dns-5845.svc wheezy_tcp@_http._tcp.dns-test-service.dns-5845.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-5845 jessie_tcp@dns-test-service.dns-5845 jessie_udp@dns-test-service.dns-5845.svc jessie_tcp@dns-test-service.dns-5845.svc jessie_udp@_http._tcp.dns-test-service.dns-5845.svc jessie_tcp@_http._tcp.dns-test-service.dns-5845.svc] Mar 22 01:56:46.979: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-5845/dns-test-db80272a-06f7-4b9a-ba37-e321716dc9a7: the server could not find the requested resource (get pods dns-test-db80272a-06f7-4b9a-ba37-e321716dc9a7) Mar 22 01:56:46.983: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-5845/dns-test-db80272a-06f7-4b9a-ba37-e321716dc9a7: the server could not find the requested resource (get pods dns-test-db80272a-06f7-4b9a-ba37-e321716dc9a7) Mar 22 01:56:46.986: INFO: Unable to read wheezy_udp@dns-test-service.dns-5845 from pod dns-5845/dns-test-db80272a-06f7-4b9a-ba37-e321716dc9a7: the server could not find the requested resource (get pods dns-test-db80272a-06f7-4b9a-ba37-e321716dc9a7) Mar 22 01:56:46.989: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5845 from pod dns-5845/dns-test-db80272a-06f7-4b9a-ba37-e321716dc9a7: the server could not find the requested resource (get pods dns-test-db80272a-06f7-4b9a-ba37-e321716dc9a7) Mar 22 01:56:46.991: INFO: Unable to read wheezy_udp@dns-test-service.dns-5845.svc from pod dns-5845/dns-test-db80272a-06f7-4b9a-ba37-e321716dc9a7: the server could not find the requested resource (get pods dns-test-db80272a-06f7-4b9a-ba37-e321716dc9a7) Mar 22 01:56:46.993: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5845.svc from pod dns-5845/dns-test-db80272a-06f7-4b9a-ba37-e321716dc9a7: the server could not find the requested resource (get pods dns-test-db80272a-06f7-4b9a-ba37-e321716dc9a7) Mar 22 01:56:46.996: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5845.svc from pod dns-5845/dns-test-db80272a-06f7-4b9a-ba37-e321716dc9a7: the server could not find the requested resource (get pods dns-test-db80272a-06f7-4b9a-ba37-e321716dc9a7) Mar 22 01:56:46.998: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5845.svc from pod dns-5845/dns-test-db80272a-06f7-4b9a-ba37-e321716dc9a7: the server could not find the requested resource (get pods dns-test-db80272a-06f7-4b9a-ba37-e321716dc9a7) Mar 22 01:56:47.020: INFO: Unable to read jessie_udp@dns-test-service from pod dns-5845/dns-test-db80272a-06f7-4b9a-ba37-e321716dc9a7: the server could not find the requested resource (get pods dns-test-db80272a-06f7-4b9a-ba37-e321716dc9a7) Mar 22 01:56:47.023: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-5845/dns-test-db80272a-06f7-4b9a-ba37-e321716dc9a7: the server could not find the requested resource (get pods dns-test-db80272a-06f7-4b9a-ba37-e321716dc9a7) Mar 22 01:56:47.026: INFO: Unable to read jessie_udp@dns-test-service.dns-5845 from pod dns-5845/dns-test-db80272a-06f7-4b9a-ba37-e321716dc9a7: the server could not find the requested resource (get pods dns-test-db80272a-06f7-4b9a-ba37-e321716dc9a7) Mar 22 01:56:47.030: INFO: Unable to read jessie_tcp@dns-test-service.dns-5845 from pod dns-5845/dns-test-db80272a-06f7-4b9a-ba37-e321716dc9a7: the server could not find the requested resource (get pods dns-test-db80272a-06f7-4b9a-ba37-e321716dc9a7) Mar 22 01:56:47.033: INFO: Unable to read jessie_udp@dns-test-service.dns-5845.svc from pod 
dns-5845/dns-test-db80272a-06f7-4b9a-ba37-e321716dc9a7: the server could not find the requested resource (get pods dns-test-db80272a-06f7-4b9a-ba37-e321716dc9a7) Mar 22 01:56:47.036: INFO: Unable to read jessie_tcp@dns-test-service.dns-5845.svc from pod dns-5845/dns-test-db80272a-06f7-4b9a-ba37-e321716dc9a7: the server could not find the requested resource (get pods dns-test-db80272a-06f7-4b9a-ba37-e321716dc9a7) Mar 22 01:56:47.040: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5845.svc from pod dns-5845/dns-test-db80272a-06f7-4b9a-ba37-e321716dc9a7: the server could not find the requested resource (get pods dns-test-db80272a-06f7-4b9a-ba37-e321716dc9a7) Mar 22 01:56:47.043: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5845.svc from pod dns-5845/dns-test-db80272a-06f7-4b9a-ba37-e321716dc9a7: the server could not find the requested resource (get pods dns-test-db80272a-06f7-4b9a-ba37-e321716dc9a7) Mar 22 01:56:47.061: INFO: Lookups using dns-5845/dns-test-db80272a-06f7-4b9a-ba37-e321716dc9a7 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-5845 wheezy_tcp@dns-test-service.dns-5845 wheezy_udp@dns-test-service.dns-5845.svc wheezy_tcp@dns-test-service.dns-5845.svc wheezy_udp@_http._tcp.dns-test-service.dns-5845.svc wheezy_tcp@_http._tcp.dns-test-service.dns-5845.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-5845 jessie_tcp@dns-test-service.dns-5845 jessie_udp@dns-test-service.dns-5845.svc jessie_tcp@dns-test-service.dns-5845.svc jessie_udp@_http._tcp.dns-test-service.dns-5845.svc jessie_tcp@_http._tcp.dns-test-service.dns-5845.svc] Mar 22 01:56:51.978: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-5845/dns-test-db80272a-06f7-4b9a-ba37-e321716dc9a7: the server could not find the requested resource (get pods dns-test-db80272a-06f7-4b9a-ba37-e321716dc9a7) Mar 22 01:56:51.981: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-5845/dns-test-db80272a-06f7-4b9a-ba37-e321716dc9a7: the server could not find the requested resource (get pods dns-test-db80272a-06f7-4b9a-ba37-e321716dc9a7) Mar 22 01:56:51.983: INFO: Unable to read wheezy_udp@dns-test-service.dns-5845 from pod dns-5845/dns-test-db80272a-06f7-4b9a-ba37-e321716dc9a7: the server could not find the requested resource (get pods dns-test-db80272a-06f7-4b9a-ba37-e321716dc9a7) Mar 22 01:56:51.986: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5845 from pod dns-5845/dns-test-db80272a-06f7-4b9a-ba37-e321716dc9a7: the server could not find the requested resource (get pods dns-test-db80272a-06f7-4b9a-ba37-e321716dc9a7) Mar 22 01:56:51.989: INFO: Unable to read wheezy_udp@dns-test-service.dns-5845.svc from pod dns-5845/dns-test-db80272a-06f7-4b9a-ba37-e321716dc9a7: the server could not find the requested resource (get pods dns-test-db80272a-06f7-4b9a-ba37-e321716dc9a7) Mar 22 01:56:51.992: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5845.svc from pod dns-5845/dns-test-db80272a-06f7-4b9a-ba37-e321716dc9a7: the server could not find the requested resource (get pods dns-test-db80272a-06f7-4b9a-ba37-e321716dc9a7) Mar 22 01:56:51.994: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5845.svc from pod dns-5845/dns-test-db80272a-06f7-4b9a-ba37-e321716dc9a7: the server could not find the requested resource (get pods dns-test-db80272a-06f7-4b9a-ba37-e321716dc9a7) Mar 22 01:56:51.997: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5845.svc from pod 
dns-5845/dns-test-db80272a-06f7-4b9a-ba37-e321716dc9a7: the server could not find the requested resource (get pods dns-test-db80272a-06f7-4b9a-ba37-e321716dc9a7) Mar 22 01:56:52.013: INFO: Unable to read jessie_udp@dns-test-service from pod dns-5845/dns-test-db80272a-06f7-4b9a-ba37-e321716dc9a7: the server could not find the requested resource (get pods dns-test-db80272a-06f7-4b9a-ba37-e321716dc9a7) Mar 22 01:56:52.016: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-5845/dns-test-db80272a-06f7-4b9a-ba37-e321716dc9a7: the server could not find the requested resource (get pods dns-test-db80272a-06f7-4b9a-ba37-e321716dc9a7) Mar 22 01:56:52.018: INFO: Unable to read jessie_udp@dns-test-service.dns-5845 from pod dns-5845/dns-test-db80272a-06f7-4b9a-ba37-e321716dc9a7: the server could not find the requested resource (get pods dns-test-db80272a-06f7-4b9a-ba37-e321716dc9a7) Mar 22 01:56:52.020: INFO: Unable to read jessie_tcp@dns-test-service.dns-5845 from pod dns-5845/dns-test-db80272a-06f7-4b9a-ba37-e321716dc9a7: the server could not find the requested resource (get pods dns-test-db80272a-06f7-4b9a-ba37-e321716dc9a7) Mar 22 01:56:52.022: INFO: Unable to read jessie_udp@dns-test-service.dns-5845.svc from pod dns-5845/dns-test-db80272a-06f7-4b9a-ba37-e321716dc9a7: the server could not find the requested resource (get pods dns-test-db80272a-06f7-4b9a-ba37-e321716dc9a7) Mar 22 01:56:52.025: INFO: Unable to read jessie_tcp@dns-test-service.dns-5845.svc from pod dns-5845/dns-test-db80272a-06f7-4b9a-ba37-e321716dc9a7: the server could not find the requested resource (get pods dns-test-db80272a-06f7-4b9a-ba37-e321716dc9a7) Mar 22 01:56:52.027: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5845.svc from pod dns-5845/dns-test-db80272a-06f7-4b9a-ba37-e321716dc9a7: the server could not find the requested resource (get pods dns-test-db80272a-06f7-4b9a-ba37-e321716dc9a7) Mar 22 01:56:52.029: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5845.svc from pod dns-5845/dns-test-db80272a-06f7-4b9a-ba37-e321716dc9a7: the server could not find the requested resource (get pods dns-test-db80272a-06f7-4b9a-ba37-e321716dc9a7) Mar 22 01:56:52.111: INFO: Lookups using dns-5845/dns-test-db80272a-06f7-4b9a-ba37-e321716dc9a7 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-5845 wheezy_tcp@dns-test-service.dns-5845 wheezy_udp@dns-test-service.dns-5845.svc wheezy_tcp@dns-test-service.dns-5845.svc wheezy_udp@_http._tcp.dns-test-service.dns-5845.svc wheezy_tcp@_http._tcp.dns-test-service.dns-5845.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-5845 jessie_tcp@dns-test-service.dns-5845 jessie_udp@dns-test-service.dns-5845.svc jessie_tcp@dns-test-service.dns-5845.svc jessie_udp@_http._tcp.dns-test-service.dns-5845.svc jessie_tcp@_http._tcp.dns-test-service.dns-5845.svc] Mar 22 01:56:56.978: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-5845/dns-test-db80272a-06f7-4b9a-ba37-e321716dc9a7: the server could not find the requested resource (get pods dns-test-db80272a-06f7-4b9a-ba37-e321716dc9a7) Mar 22 01:56:56.981: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-5845/dns-test-db80272a-06f7-4b9a-ba37-e321716dc9a7: the server could not find the requested resource (get pods dns-test-db80272a-06f7-4b9a-ba37-e321716dc9a7) Mar 22 01:56:56.983: INFO: Unable to read wheezy_udp@dns-test-service.dns-5845 from pod dns-5845/dns-test-db80272a-06f7-4b9a-ba37-e321716dc9a7: the 
server could not find the requested resource (get pods dns-test-db80272a-06f7-4b9a-ba37-e321716dc9a7) Mar 22 01:56:56.985: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5845 from pod dns-5845/dns-test-db80272a-06f7-4b9a-ba37-e321716dc9a7: the server could not find the requested resource (get pods dns-test-db80272a-06f7-4b9a-ba37-e321716dc9a7) Mar 22 01:56:56.987: INFO: Unable to read wheezy_udp@dns-test-service.dns-5845.svc from pod dns-5845/dns-test-db80272a-06f7-4b9a-ba37-e321716dc9a7: the server could not find the requested resource (get pods dns-test-db80272a-06f7-4b9a-ba37-e321716dc9a7) Mar 22 01:56:56.990: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5845.svc from pod dns-5845/dns-test-db80272a-06f7-4b9a-ba37-e321716dc9a7: the server could not find the requested resource (get pods dns-test-db80272a-06f7-4b9a-ba37-e321716dc9a7) Mar 22 01:56:56.992: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5845.svc from pod dns-5845/dns-test-db80272a-06f7-4b9a-ba37-e321716dc9a7: the server could not find the requested resource (get pods dns-test-db80272a-06f7-4b9a-ba37-e321716dc9a7) Mar 22 01:56:56.995: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5845.svc from pod dns-5845/dns-test-db80272a-06f7-4b9a-ba37-e321716dc9a7: the server could not find the requested resource (get pods dns-test-db80272a-06f7-4b9a-ba37-e321716dc9a7) Mar 22 01:56:57.016: INFO: Unable to read jessie_udp@dns-test-service from pod dns-5845/dns-test-db80272a-06f7-4b9a-ba37-e321716dc9a7: the server could not find the requested resource (get pods dns-test-db80272a-06f7-4b9a-ba37-e321716dc9a7) Mar 22 01:56:57.018: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-5845/dns-test-db80272a-06f7-4b9a-ba37-e321716dc9a7: the server could not find the requested resource (get pods dns-test-db80272a-06f7-4b9a-ba37-e321716dc9a7) Mar 22 01:56:57.020: INFO: Unable to read jessie_udp@dns-test-service.dns-5845 from pod dns-5845/dns-test-db80272a-06f7-4b9a-ba37-e321716dc9a7: the server could not find the requested resource (get pods dns-test-db80272a-06f7-4b9a-ba37-e321716dc9a7) Mar 22 01:56:57.022: INFO: Unable to read jessie_tcp@dns-test-service.dns-5845 from pod dns-5845/dns-test-db80272a-06f7-4b9a-ba37-e321716dc9a7: the server could not find the requested resource (get pods dns-test-db80272a-06f7-4b9a-ba37-e321716dc9a7) Mar 22 01:56:57.025: INFO: Unable to read jessie_udp@dns-test-service.dns-5845.svc from pod dns-5845/dns-test-db80272a-06f7-4b9a-ba37-e321716dc9a7: the server could not find the requested resource (get pods dns-test-db80272a-06f7-4b9a-ba37-e321716dc9a7) Mar 22 01:56:57.026: INFO: Unable to read jessie_tcp@dns-test-service.dns-5845.svc from pod dns-5845/dns-test-db80272a-06f7-4b9a-ba37-e321716dc9a7: the server could not find the requested resource (get pods dns-test-db80272a-06f7-4b9a-ba37-e321716dc9a7) Mar 22 01:56:57.029: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5845.svc from pod dns-5845/dns-test-db80272a-06f7-4b9a-ba37-e321716dc9a7: the server could not find the requested resource (get pods dns-test-db80272a-06f7-4b9a-ba37-e321716dc9a7) Mar 22 01:56:57.031: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5845.svc from pod dns-5845/dns-test-db80272a-06f7-4b9a-ba37-e321716dc9a7: the server could not find the requested resource (get pods dns-test-db80272a-06f7-4b9a-ba37-e321716dc9a7) Mar 22 01:56:57.048: INFO: Lookups using dns-5845/dns-test-db80272a-06f7-4b9a-ba37-e321716dc9a7 failed for: [wheezy_udp@dns-test-service 
wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-5845 wheezy_tcp@dns-test-service.dns-5845 wheezy_udp@dns-test-service.dns-5845.svc wheezy_tcp@dns-test-service.dns-5845.svc wheezy_udp@_http._tcp.dns-test-service.dns-5845.svc wheezy_tcp@_http._tcp.dns-test-service.dns-5845.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-5845 jessie_tcp@dns-test-service.dns-5845 jessie_udp@dns-test-service.dns-5845.svc jessie_tcp@dns-test-service.dns-5845.svc jessie_udp@_http._tcp.dns-test-service.dns-5845.svc jessie_tcp@_http._tcp.dns-test-service.dns-5845.svc] Mar 22 01:57:02.058: INFO: DNS probes using dns-5845/dns-test-db80272a-06f7-4b9a-ba37-e321716dc9a7 succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 01:57:02.898: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-5845" for this suite. • [SLOW TEST:39.393 seconds] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":330,"completed":268,"skipped":4757,"failed":18,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-apps] CronJob should not schedule jobs when suspended [Slow] 
[Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Variable Expansion should allow substituting values in a volume subpath [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 01:57:02.924: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a volume subpath [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test substitution in volume subpath Mar 22 01:57:02.981: INFO: Waiting up to 5m0s for pod "var-expansion-3c69914f-94da-47fb-93f1-28f04133f2a2" in namespace "var-expansion-5648" to be "Succeeded or Failed" Mar 22 01:57:03.033: INFO: Pod "var-expansion-3c69914f-94da-47fb-93f1-28f04133f2a2": Phase="Pending", Reason="", readiness=false. Elapsed: 52.192108ms Mar 22 01:57:05.051: INFO: Pod "var-expansion-3c69914f-94da-47fb-93f1-28f04133f2a2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.069543775s Mar 22 01:57:07.055: INFO: Pod "var-expansion-3c69914f-94da-47fb-93f1-28f04133f2a2": Phase="Running", Reason="", readiness=true. Elapsed: 4.074168716s Mar 22 01:57:09.068: INFO: Pod "var-expansion-3c69914f-94da-47fb-93f1-28f04133f2a2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.087417119s STEP: Saw pod success Mar 22 01:57:09.069: INFO: Pod "var-expansion-3c69914f-94da-47fb-93f1-28f04133f2a2" satisfied condition "Succeeded or Failed" Mar 22 01:57:09.071: INFO: Trying to get logs from node latest-worker2 pod var-expansion-3c69914f-94da-47fb-93f1-28f04133f2a2 container dapi-container: STEP: delete the pod Mar 22 01:57:09.101: INFO: Waiting for pod var-expansion-3c69914f-94da-47fb-93f1-28f04133f2a2 to disappear Mar 22 01:57:09.134: INFO: Pod var-expansion-3c69914f-94da-47fb-93f1-28f04133f2a2 no longer exists [AfterEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 01:57:09.134: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-5648" for this suite. 
• [SLOW TEST:6.220 seconds] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should allow substituting values in a volume subpath [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Variable Expansion should allow substituting values in a volume subpath [Conformance]","total":330,"completed":269,"skipped":4883,"failed":18,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-apps] CronJob should not schedule jobs when suspended [Slow] [Conformance]"]} S ------------------------------ [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 01:57:09.144: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Mar 22 01:57:09.252: INFO: Creating ReplicaSet my-hostname-basic-ab718dab-c0be-4bd3-9b59-7d9794709a63 Mar 22 01:57:09.262: INFO: Pod name my-hostname-basic-ab718dab-c0be-4bd3-9b59-7d9794709a63: Found 0 pods out of 1 Mar 22 01:57:14.265: INFO: Pod name 
my-hostname-basic-ab718dab-c0be-4bd3-9b59-7d9794709a63: Found 1 pods out of 1 Mar 22 01:57:14.265: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-ab718dab-c0be-4bd3-9b59-7d9794709a63" is running Mar 22 01:57:14.268: INFO: Pod "my-hostname-basic-ab718dab-c0be-4bd3-9b59-7d9794709a63-ljfxm" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-22 01:57:09 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-22 01:57:12 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-22 01:57:12 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-03-22 01:57:09 +0000 UTC Reason: Message:}]) Mar 22 01:57:14.269: INFO: Trying to dial the pod Mar 22 01:57:19.282: INFO: Controller my-hostname-basic-ab718dab-c0be-4bd3-9b59-7d9794709a63: Got expected result from replica 1 [my-hostname-basic-ab718dab-c0be-4bd3-9b59-7d9794709a63-ljfxm]: "my-hostname-basic-ab718dab-c0be-4bd3-9b59-7d9794709a63-ljfxm", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 01:57:19.282: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-2128" for this suite. • [SLOW TEST:10.148 seconds] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]","total":330,"completed":270,"skipped":4884,"failed":18,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] 
[Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-apps] CronJob should not schedule jobs when suspended [Slow] [Conformance]"]} SSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for pods for Subdomain [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 01:57:19.293: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Subdomain [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-8419.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-querier-2.dns-test-service-2.dns-8419.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-8419.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8419.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-8419.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service-2.dns-8419.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-8419.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service-2.dns-8419.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8419.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-8419.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-querier-2.dns-test-service-2.dns-8419.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-8419.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-querier-2.dns-test-service-2.dns-8419.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-8419.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service-2.dns-8419.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-8419.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service-2.dns-8419.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-8419.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 22 01:57:27.548: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-8419.svc.cluster.local from pod dns-8419/dns-test-682fafd8-19fe-4cb9-b08d-5a45a4add01e: the server could not find the requested resource (get pods dns-test-682fafd8-19fe-4cb9-b08d-5a45a4add01e) Mar 22 01:57:27.552: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8419.svc.cluster.local from pod dns-8419/dns-test-682fafd8-19fe-4cb9-b08d-5a45a4add01e: the server could not find the requested resource (get pods dns-test-682fafd8-19fe-4cb9-b08d-5a45a4add01e) Mar 22 01:57:27.556: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-8419.svc.cluster.local from pod dns-8419/dns-test-682fafd8-19fe-4cb9-b08d-5a45a4add01e: the server could not find the requested resource (get pods dns-test-682fafd8-19fe-4cb9-b08d-5a45a4add01e) Mar 22 01:57:27.560: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-8419.svc.cluster.local from pod dns-8419/dns-test-682fafd8-19fe-4cb9-b08d-5a45a4add01e: the server could not find the requested resource (get pods dns-test-682fafd8-19fe-4cb9-b08d-5a45a4add01e) Mar 22 01:57:27.570: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-8419.svc.cluster.local from pod dns-8419/dns-test-682fafd8-19fe-4cb9-b08d-5a45a4add01e: the server could not find the requested resource (get pods dns-test-682fafd8-19fe-4cb9-b08d-5a45a4add01e) Mar 22 01:57:27.573: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-8419.svc.cluster.local from pod dns-8419/dns-test-682fafd8-19fe-4cb9-b08d-5a45a4add01e: the server could not find the requested resource (get pods dns-test-682fafd8-19fe-4cb9-b08d-5a45a4add01e) Mar 22 01:57:27.597: INFO: Lookups using dns-8419/dns-test-682fafd8-19fe-4cb9-b08d-5a45a4add01e failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-8419.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8419.svc.cluster.local wheezy_udp@dns-test-service-2.dns-8419.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-8419.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-8419.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-8419.svc.cluster.local] Mar 22 01:57:32.604: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-8419.svc.cluster.local from pod dns-8419/dns-test-682fafd8-19fe-4cb9-b08d-5a45a4add01e: the server could not find the requested resource (get pods dns-test-682fafd8-19fe-4cb9-b08d-5a45a4add01e) Mar 22 01:57:32.608: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8419.svc.cluster.local from pod dns-8419/dns-test-682fafd8-19fe-4cb9-b08d-5a45a4add01e: the server could not find the requested resource (get pods dns-test-682fafd8-19fe-4cb9-b08d-5a45a4add01e) Mar 22 01:57:32.623: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-8419.svc.cluster.local from pod dns-8419/dns-test-682fafd8-19fe-4cb9-b08d-5a45a4add01e: the server could not find the requested resource (get pods dns-test-682fafd8-19fe-4cb9-b08d-5a45a4add01e) Mar 22 01:57:32.626: INFO: 
Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-8419.svc.cluster.local from pod dns-8419/dns-test-682fafd8-19fe-4cb9-b08d-5a45a4add01e: the server could not find the requested resource (get pods dns-test-682fafd8-19fe-4cb9-b08d-5a45a4add01e) Mar 22 01:57:32.637: INFO: Lookups using dns-8419/dns-test-682fafd8-19fe-4cb9-b08d-5a45a4add01e failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-8419.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8419.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-8419.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-8419.svc.cluster.local] Mar 22 01:57:37.603: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-8419.svc.cluster.local from pod dns-8419/dns-test-682fafd8-19fe-4cb9-b08d-5a45a4add01e: the server could not find the requested resource (get pods dns-test-682fafd8-19fe-4cb9-b08d-5a45a4add01e) Mar 22 01:57:37.607: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8419.svc.cluster.local from pod dns-8419/dns-test-682fafd8-19fe-4cb9-b08d-5a45a4add01e: the server could not find the requested resource (get pods dns-test-682fafd8-19fe-4cb9-b08d-5a45a4add01e) Mar 22 01:57:37.624: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-8419.svc.cluster.local from pod dns-8419/dns-test-682fafd8-19fe-4cb9-b08d-5a45a4add01e: the server could not find the requested resource (get pods dns-test-682fafd8-19fe-4cb9-b08d-5a45a4add01e) Mar 22 01:57:37.627: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-8419.svc.cluster.local from pod dns-8419/dns-test-682fafd8-19fe-4cb9-b08d-5a45a4add01e: the server could not find the requested resource (get pods dns-test-682fafd8-19fe-4cb9-b08d-5a45a4add01e) Mar 22 01:57:37.637: INFO: Lookups using dns-8419/dns-test-682fafd8-19fe-4cb9-b08d-5a45a4add01e failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-8419.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8419.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-8419.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-8419.svc.cluster.local] Mar 22 01:57:42.603: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-8419.svc.cluster.local from pod dns-8419/dns-test-682fafd8-19fe-4cb9-b08d-5a45a4add01e: the server could not find the requested resource (get pods dns-test-682fafd8-19fe-4cb9-b08d-5a45a4add01e) Mar 22 01:57:42.607: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8419.svc.cluster.local from pod dns-8419/dns-test-682fafd8-19fe-4cb9-b08d-5a45a4add01e: the server could not find the requested resource (get pods dns-test-682fafd8-19fe-4cb9-b08d-5a45a4add01e) Mar 22 01:57:42.625: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-8419.svc.cluster.local from pod dns-8419/dns-test-682fafd8-19fe-4cb9-b08d-5a45a4add01e: the server could not find the requested resource (get pods dns-test-682fafd8-19fe-4cb9-b08d-5a45a4add01e) Mar 22 01:57:42.629: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-8419.svc.cluster.local from pod dns-8419/dns-test-682fafd8-19fe-4cb9-b08d-5a45a4add01e: the server could not find the requested resource (get pods dns-test-682fafd8-19fe-4cb9-b08d-5a45a4add01e) Mar 22 01:57:42.640: INFO: Lookups using dns-8419/dns-test-682fafd8-19fe-4cb9-b08d-5a45a4add01e failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-8419.svc.cluster.local 
wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8419.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-8419.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-8419.svc.cluster.local] Mar 22 01:57:47.603: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-8419.svc.cluster.local from pod dns-8419/dns-test-682fafd8-19fe-4cb9-b08d-5a45a4add01e: the server could not find the requested resource (get pods dns-test-682fafd8-19fe-4cb9-b08d-5a45a4add01e) Mar 22 01:57:47.607: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8419.svc.cluster.local from pod dns-8419/dns-test-682fafd8-19fe-4cb9-b08d-5a45a4add01e: the server could not find the requested resource (get pods dns-test-682fafd8-19fe-4cb9-b08d-5a45a4add01e) Mar 22 01:57:47.625: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-8419.svc.cluster.local from pod dns-8419/dns-test-682fafd8-19fe-4cb9-b08d-5a45a4add01e: the server could not find the requested resource (get pods dns-test-682fafd8-19fe-4cb9-b08d-5a45a4add01e) Mar 22 01:57:47.628: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-8419.svc.cluster.local from pod dns-8419/dns-test-682fafd8-19fe-4cb9-b08d-5a45a4add01e: the server could not find the requested resource (get pods dns-test-682fafd8-19fe-4cb9-b08d-5a45a4add01e) Mar 22 01:57:47.648: INFO: Lookups using dns-8419/dns-test-682fafd8-19fe-4cb9-b08d-5a45a4add01e failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-8419.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8419.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-8419.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-8419.svc.cluster.local] Mar 22 01:57:52.669: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-8419.svc.cluster.local from pod dns-8419/dns-test-682fafd8-19fe-4cb9-b08d-5a45a4add01e: the server could not find the requested resource (get pods dns-test-682fafd8-19fe-4cb9-b08d-5a45a4add01e) Mar 22 01:57:52.672: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8419.svc.cluster.local from pod dns-8419/dns-test-682fafd8-19fe-4cb9-b08d-5a45a4add01e: the server could not find the requested resource (get pods dns-test-682fafd8-19fe-4cb9-b08d-5a45a4add01e) Mar 22 01:57:52.687: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-8419.svc.cluster.local from pod dns-8419/dns-test-682fafd8-19fe-4cb9-b08d-5a45a4add01e: the server could not find the requested resource (get pods dns-test-682fafd8-19fe-4cb9-b08d-5a45a4add01e) Mar 22 01:57:52.690: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-8419.svc.cluster.local from pod dns-8419/dns-test-682fafd8-19fe-4cb9-b08d-5a45a4add01e: the server could not find the requested resource (get pods dns-test-682fafd8-19fe-4cb9-b08d-5a45a4add01e) Mar 22 01:57:52.700: INFO: Lookups using dns-8419/dns-test-682fafd8-19fe-4cb9-b08d-5a45a4add01e failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-8419.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8419.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-8419.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-8419.svc.cluster.local] Mar 22 01:57:57.641: INFO: DNS probes using dns-8419/dns-test-682fafd8-19fe-4cb9-b08d-5a45a4add01e succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 01:57:58.489: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-8419" for this suite. • [SLOW TEST:39.213 seconds] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should provide DNS for pods for Subdomain [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":330,"completed":271,"skipped":4899,"failed":18,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-apps] CronJob should not schedule jobs when suspended [Slow] [Conformance]"]} S ------------------------------ [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-node] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 01:57:58.506: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:52 STEP: create the 
container to handle the HTTPGet hook request. Mar 22 01:57:58.672: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Mar 22 01:58:00.675: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Mar 22 01:58:02.676: INFO: The status of Pod pod-handle-http-request is Running (Ready = true) [It] should execute prestop http hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: create the pod with lifecycle hook Mar 22 01:58:02.690: INFO: The status of Pod pod-with-prestop-http-hook is Pending, waiting for it to be Running (with Ready = true) Mar 22 01:58:04.759: INFO: The status of Pod pod-with-prestop-http-hook is Pending, waiting for it to be Running (with Ready = true) Mar 22 01:58:06.696: INFO: The status of Pod pod-with-prestop-http-hook is Running (Ready = true) STEP: delete the pod with lifecycle hook Mar 22 01:58:06.704: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Mar 22 01:58:06.713: INFO: Pod pod-with-prestop-http-hook still exists Mar 22 01:58:08.713: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Mar 22 01:58:08.718: INFO: Pod pod-with-prestop-http-hook still exists Mar 22 01:58:10.714: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Mar 22 01:58:10.719: INFO: Pod pod-with-prestop-http-hook still exists Mar 22 01:58:12.713: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Mar 22 01:58:12.719: INFO: Pod pod-with-prestop-http-hook still exists Mar 22 01:58:14.714: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Mar 22 01:58:14.722: INFO: Pod pod-with-prestop-http-hook still exists Mar 22 01:58:16.714: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Mar 22 01:58:16.723: INFO: Pod pod-with-prestop-http-hook still exists Mar 22 01:58:18.714: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Mar 22 01:58:18.719: INFO: Pod pod-with-prestop-http-hook still exists Mar 22 01:58:20.714: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Mar 22 01:58:20.718: INFO: Pod pod-with-prestop-http-hook still exists Mar 22 01:58:22.714: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Mar 22 01:58:22.718: INFO: Pod pod-with-prestop-http-hook still exists Mar 22 01:58:24.713: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Mar 22 01:58:24.725: INFO: Pod pod-with-prestop-http-hook still exists Mar 22 01:58:26.714: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Mar 22 01:58:26.720: INFO: Pod pod-with-prestop-http-hook still exists Mar 22 01:58:28.714: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Mar 22 01:58:28.722: INFO: Pod pod-with-prestop-http-hook still exists Mar 22 01:58:30.713: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Mar 22 01:58:30.722: INFO: Pod pod-with-prestop-http-hook still exists Mar 22 01:58:32.714: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Mar 22 01:58:32.719: INFO: Pod pod-with-prestop-http-hook still exists Mar 22 01:58:34.713: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Mar 22 01:58:34.718: INFO: Pod pod-with-prestop-http-hook still exists Mar 22 01:58:36.714: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Mar 22 01:58:36.718: INFO: Pod pod-with-prestop-http-hook still exists Mar 22 01:58:38.714: INFO: Waiting for pod 
pod-with-prestop-http-hook to disappear Mar 22 01:58:38.720: INFO: Pod pod-with-prestop-http-hook still exists Mar 22 01:58:40.714: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Mar 22 01:58:40.719: INFO: Pod pod-with-prestop-http-hook still exists Mar 22 01:58:42.713: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Mar 22 01:58:42.718: INFO: Pod pod-with-prestop-http-hook still exists Mar 22 01:58:44.714: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Mar 22 01:58:44.718: INFO: Pod pod-with-prestop-http-hook still exists Mar 22 01:58:46.714: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Mar 22 01:58:46.718: INFO: Pod pod-with-prestop-http-hook still exists Mar 22 01:58:48.713: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Mar 22 01:58:48.717: INFO: Pod pod-with-prestop-http-hook still exists Mar 22 01:58:50.713: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Mar 22 01:58:50.720: INFO: Pod pod-with-prestop-http-hook still exists Mar 22 01:58:52.713: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Mar 22 01:58:52.717: INFO: Pod pod-with-prestop-http-hook still exists Mar 22 01:58:54.714: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Mar 22 01:58:54.719: INFO: Pod pod-with-prestop-http-hook still exists Mar 22 01:58:56.714: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Mar 22 01:58:56.722: INFO: Pod pod-with-prestop-http-hook no longer exists STEP: check prestop hook [AfterEach] [sig-node] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 01:58:56.742: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-9863" for this suite. 
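
[editor annotation] What the long poll above is waiting out: deleting pod-with-prestop-http-hook triggers its preStop HTTP GET against the handler pod, and the pod only disappears once the hook and the termination grace period have run their course. A sketch of such a pod in client-go follows, assuming the v1.21 API; the handler IP, port, and image are placeholders, not values from this run.

// sketch: container with an HTTP preStop lifecycle hook
package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-with-prestop-http-hook"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "main",
				Image: "nginx:1.21", // illustrative image
				Lifecycle: &corev1.Lifecycle{
					// corev1.Handler in the v1.21 API (renamed LifecycleHandler later)
					PreStop: &corev1.Handler{
						HTTPGet: &corev1.HTTPGetAction{
							Path: "/echo?msg=prestop",
							Host: "10.244.0.10", // hypothetical handler pod IP
							Port: intstr.FromInt(8080),
						},
					},
				},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
	// Deleting the pod fires the preStop GET before the container gets SIGTERM:
	// cs.CoreV1().Pods("default").Delete(context.TODO(), pod.Name, metav1.DeleteOptions{})
}

The "check prestop hook" step then asserts that the handler pod actually received the GET.
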
• [SLOW TEST:58.246 seconds] [sig-node] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 when create a pod with lifecycle hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:43 should execute prestop http hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":330,"completed":272,"skipped":4900,"failed":18,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-apps] CronJob should not schedule jobs when suspended [Slow] [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 01:58:56.752: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test emptydir 0644 on tmpfs Mar 22 01:58:56.884: INFO: Waiting up to 5m0s for pod "pod-67eb9453-0dc5-4da6-b1a3-b8aee3b619fe" in namespace "emptydir-1521" to be "Succeeded or Failed" Mar 22 01:58:56.906: INFO: Pod "pod-67eb9453-0dc5-4da6-b1a3-b8aee3b619fe": Phase="Pending", Reason="", readiness=false. Elapsed: 21.767467ms Mar 22 01:58:58.943: INFO: Pod "pod-67eb9453-0dc5-4da6-b1a3-b8aee3b619fe": Phase="Pending", Reason="", readiness=false. Elapsed: 2.05860788s Mar 22 01:59:00.948: INFO: Pod "pod-67eb9453-0dc5-4da6-b1a3-b8aee3b619fe": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.064504251s STEP: Saw pod success Mar 22 01:59:00.949: INFO: Pod "pod-67eb9453-0dc5-4da6-b1a3-b8aee3b619fe" satisfied condition "Succeeded or Failed" Mar 22 01:59:00.953: INFO: Trying to get logs from node latest-worker2 pod pod-67eb9453-0dc5-4da6-b1a3-b8aee3b619fe container test-container: STEP: delete the pod Mar 22 01:59:00.994: INFO: Waiting for pod pod-67eb9453-0dc5-4da6-b1a3-b8aee3b619fe to disappear Mar 22 01:59:01.001: INFO: Pod pod-67eb9453-0dc5-4da6-b1a3-b8aee3b619fe no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 01:59:01.001: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1521" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":330,"completed":273,"skipped":4952,"failed":18,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-apps] CronJob 
should not schedule jobs when suspended [Slow] [Conformance]"]} S ------------------------------ [sig-network] Services should find a service from listing all namespaces [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 01:59:01.010: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should find a service from listing all namespaces [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: fetching services [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 01:59:01.145: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-4535" for this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 •{"msg":"PASSED [sig-network] Services should find a service from listing all namespaces [Conformance]","total":330,"completed":274,"skipped":4953,"failed":18,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-apps] CronJob should not schedule jobs when suspended [Slow] [Conformance]"]} SS 
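
[editor annotation] The lookup this test performs, "fetching services" across every namespace, corresponds to a List call with the empty namespace. A client-go sketch follows; the label selector shown is a hypothetical stand-in for however the test tags its service.

// sketch: list services cluster-wide, as in "listing all namespaces"
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// metav1.NamespaceAll ("") lists across all namespaces; the selector
	// narrows the result to the service the test created.
	svcs, err := cs.CoreV1().Services(metav1.NamespaceAll).List(context.TODO(),
		metav1.ListOptions{LabelSelector: "test-service-static=true"}) // hypothetical label
	if err != nil {
		panic(err)
	}
	for _, s := range svcs.Items {
		fmt.Printf("%s/%s\n", s.Namespace, s.Name)
	}
}

Finding the expected service in that cluster-wide list is the whole assertion, which is why the test completes in well under a second.
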
------------------------------ [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 01:59:01.154: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:135 [It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Mar 22 01:59:01.324: INFO: Creating simple daemon set daemon-set STEP: Check that daemon pods launch on every node of the cluster. Mar 22 01:59:01.338: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 22 01:59:01.340: INFO: Number of nodes with available pods: 0 Mar 22 01:59:01.340: INFO: Node latest-worker is running more than one daemon pod Mar 22 01:59:02.353: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 22 01:59:02.422: INFO: Number of nodes with available pods: 0 Mar 22 01:59:02.422: INFO: Node latest-worker is running more than one daemon pod Mar 22 01:59:03.407: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 22 01:59:03.412: INFO: Number of nodes with available pods: 0 Mar 22 01:59:03.412: INFO: Node latest-worker is running more than one daemon pod Mar 22 01:59:04.360: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 22 01:59:04.363: INFO: Number of nodes with available pods: 0 Mar 22 01:59:04.363: INFO: Node latest-worker is running more than one daemon pod Mar 22 01:59:05.346: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 22 01:59:05.349: INFO: Number of nodes with available pods: 0 Mar 22 01:59:05.349: INFO: Node latest-worker is running more than one daemon pod Mar 22 01:59:06.402: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 22 01:59:06.419: INFO: Number of nodes with available pods: 2 Mar 22 01:59:06.419: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Update daemon pods image. STEP: Check that daemon pods images are updated. Mar 22 01:59:06.491: INFO: Wrong image for pod: daemon-set-8mckl. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.28, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1. Mar 22 01:59:06.491: INFO: Wrong image for pod: daemon-set-xx625. 
Expected: k8s.gcr.io/e2e-test-images/agnhost:2.28, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1. Mar 22 01:59:06.561: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 22 01:59:07.565: INFO: Wrong image for pod: daemon-set-8mckl. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.28, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1. Mar 22 01:59:07.565: INFO: Wrong image for pod: daemon-set-xx625. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.28, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1. Mar 22 01:59:07.568: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 22 01:59:08.566: INFO: Wrong image for pod: daemon-set-8mckl. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.28, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1. Mar 22 01:59:08.566: INFO: Wrong image for pod: daemon-set-xx625. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.28, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1. Mar 22 01:59:08.571: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 22 01:59:09.583: INFO: Wrong image for pod: daemon-set-8mckl. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.28, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1. Mar 22 01:59:09.583: INFO: Wrong image for pod: daemon-set-xx625. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.28, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1. Mar 22 01:59:09.588: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 22 01:59:10.566: INFO: Wrong image for pod: daemon-set-8mckl. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.28, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1. Mar 22 01:59:10.566: INFO: Pod daemon-set-8mckl is not available Mar 22 01:59:10.566: INFO: Wrong image for pod: daemon-set-xx625. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.28, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1. Mar 22 01:59:10.573: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 22 01:59:11.566: INFO: Wrong image for pod: daemon-set-8mckl. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.28, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1. Mar 22 01:59:11.566: INFO: Pod daemon-set-8mckl is not available Mar 22 01:59:11.566: INFO: Wrong image for pod: daemon-set-xx625. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.28, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1. Mar 22 01:59:11.578: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 22 01:59:12.569: INFO: Wrong image for pod: daemon-set-8mckl. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.28, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1. Mar 22 01:59:12.569: INFO: Pod daemon-set-8mckl is not available Mar 22 01:59:12.569: INFO: Wrong image for pod: daemon-set-xx625. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.28, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1. 
Mar 22 01:59:12.573: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 22 01:59:13.586: INFO: Wrong image for pod: daemon-set-8mckl. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.28, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1. Mar 22 01:59:13.586: INFO: Pod daemon-set-8mckl is not available Mar 22 01:59:13.586: INFO: Wrong image for pod: daemon-set-xx625. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.28, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1. Mar 22 01:59:13.590: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 22 01:59:14.567: INFO: Wrong image for pod: daemon-set-8mckl. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.28, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1. Mar 22 01:59:14.567: INFO: Pod daemon-set-8mckl is not available Mar 22 01:59:14.567: INFO: Wrong image for pod: daemon-set-xx625. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.28, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1. Mar 22 01:59:14.572: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 22 01:59:15.565: INFO: Wrong image for pod: daemon-set-8mckl. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.28, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1. Mar 22 01:59:15.565: INFO: Pod daemon-set-8mckl is not available Mar 22 01:59:15.565: INFO: Wrong image for pod: daemon-set-xx625. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.28, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1. Mar 22 01:59:15.570: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 22 01:59:16.580: INFO: Wrong image for pod: daemon-set-8mckl. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.28, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1. Mar 22 01:59:16.581: INFO: Pod daemon-set-8mckl is not available Mar 22 01:59:16.581: INFO: Wrong image for pod: daemon-set-xx625. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.28, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1. Mar 22 01:59:16.590: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 22 01:59:17.565: INFO: Wrong image for pod: daemon-set-8mckl. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.28, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1. Mar 22 01:59:17.565: INFO: Pod daemon-set-8mckl is not available Mar 22 01:59:17.565: INFO: Wrong image for pod: daemon-set-xx625. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.28, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1. Mar 22 01:59:17.568: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 22 01:59:18.568: INFO: Wrong image for pod: daemon-set-8mckl. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.28, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1. Mar 22 01:59:18.568: INFO: Pod daemon-set-8mckl is not available Mar 22 01:59:18.568: INFO: Wrong image for pod: daemon-set-xx625. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.28, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1. 
Mar 22 01:59:18.573: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 22 01:59:19.571: INFO: Wrong image for pod: daemon-set-8mckl. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.28, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1. Mar 22 01:59:19.571: INFO: Pod daemon-set-8mckl is not available Mar 22 01:59:19.571: INFO: Wrong image for pod: daemon-set-xx625. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.28, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1.
[... the same three poll messages (wrong image for daemon-set-8mckl, daemon-set-8mckl not available, wrong image for daemon-set-xx625) plus the control-plane taint skip repeat roughly once per second from 01:59:19 through 01:59:54 while the rollout waits for the first pod to be replaced; identical repeats elided ...]
Mar 22 01:59:54.574: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 22 01:59:55.568: INFO: Pod daemon-set-2zxm6 is not available Mar 22 01:59:55.568: INFO: Wrong image for pod: daemon-set-xx625. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.28, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1.
[... the same pair of messages repeats each second through 01:59:58 while replacement pod daemon-set-2zxm6 comes up; identical repeats elided ...]
Mar 22 01:59:59.566: INFO: Wrong image for pod: daemon-set-xx625. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.28, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1. Mar 22 01:59:59.571: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 22 02:00:00.566: INFO: Wrong image for pod: daemon-set-xx625. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.28, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1. Mar 22 02:00:00.566: INFO: Pod daemon-set-xx625 is not available Mar 22 02:00:00.571: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 22 02:00:01.567: INFO: Wrong image for pod: daemon-set-xx625. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.28, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1. Mar 22 02:00:01.567: INFO: Pod daemon-set-xx625 is not available Mar 22 02:00:01.573: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 22 02:00:02.567: INFO: Wrong image for pod: daemon-set-xx625. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.28, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1.
Mar 22 02:00:02.567: INFO: Pod daemon-set-xx625 is not available Mar 22 02:00:02.572: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 22 02:00:03.567: INFO: Wrong image for pod: daemon-set-xx625. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.28, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1. Mar 22 02:00:03.567: INFO: Pod daemon-set-xx625 is not available Mar 22 02:00:03.571: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 22 02:00:04.568: INFO: Wrong image for pod: daemon-set-xx625. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.28, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1. Mar 22 02:00:04.568: INFO: Pod daemon-set-xx625 is not available Mar 22 02:00:04.573: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 22 02:00:05.567: INFO: Pod daemon-set-b5g2w is not available Mar 22 02:00:05.573: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node STEP: Check that daemon pods are still running on every node of the cluster. Mar 22 02:00:05.579: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 22 02:00:05.582: INFO: Number of nodes with available pods: 1 Mar 22 02:00:05.582: INFO: Node latest-worker is running more than one daemon pod Mar 22 02:00:06.589: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 22 02:00:06.592: INFO: Number of nodes with available pods: 1 Mar 22 02:00:06.592: INFO: Node latest-worker is running more than one daemon pod Mar 22 02:00:07.589: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 22 02:00:07.593: INFO: Number of nodes with available pods: 1 Mar 22 02:00:07.593: INFO: Node latest-worker is running more than one daemon pod Mar 22 02:00:08.589: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 22 02:00:08.593: INFO: Number of nodes with available pods: 1 Mar 22 02:00:08.593: INFO: Node latest-worker is running more than one daemon pod Mar 22 02:00:09.589: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 22 02:00:09.593: INFO: Number of nodes with available pods: 2 Mar 22 02:00:09.593: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:101 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-1320, will wait for the garbage collector to delete the pods Mar 22 02:00:09.673: INFO: Deleting DaemonSet.extensions daemon-set 
took: 9.862617ms Mar 22 02:00:10.274: INFO: Terminating DaemonSet.extensions daemon-set pods took: 600.777929ms Mar 22 02:00:55.078: INFO: Number of nodes with available pods: 0 Mar 22 02:00:55.078: INFO: Number of running nodes: 0, number of available pods: 0 Mar 22 02:00:55.080: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"7019550"},"items":null} Mar 22 02:00:55.082: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"7019550"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 02:00:55.091: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-1320" for this suite. • [SLOW TEST:113.944 seconds] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","total":330,"completed":275,"skipped":4955,"failed":18,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-apps] CronJob should not schedule jobs when suspended [Slow] [Conformance]"]} SSSSSSSSS
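Editor's note: the rollout polled above is the RollingUpdate path of a DaemonSet whose pod-template image was changed from k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 to k8s.gcr.io/e2e-test-images/agnhost:2.28. A minimal sketch of such a DaemonSet follows; it is a hedged reconstruction from the log, not the suite's actual fixture, and the label key and container name are illustrative. Because the template carries no toleration for the node-role.kubernetes.io/master:NoSchedule taint, the control-plane node never runs a daemon pod, which is why every poll logs "skip checking this node" for latest-control-plane.

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
  namespace: daemonsets-1320
spec:
  selector:
    matchLabels:
      app: daemon-set                # illustrative label key/value
  updateStrategy:
    type: RollingUpdate              # the strategy under test
    rollingUpdate:
      maxUnavailable: 1              # replace pods one node at a time
  template:
    metadata:
      labels:
        app: daemon-set
    spec:
      containers:
      - name: app                    # illustrative container name
        # patching this image to k8s.gcr.io/e2e-test-images/agnhost:2.28
        # starts the rollout observed in the log above
        image: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1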
[sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 02:00:55.098: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward api env vars Mar 22 02:00:55.204: INFO: Waiting up to 5m0s for pod "downward-api-8da019a8-df0b-4b2d-82fb-abc13ef04f6b" in namespace "downward-api-8872" to be "Succeeded or Failed" Mar 22 02:00:55.207: INFO: Pod "downward-api-8da019a8-df0b-4b2d-82fb-abc13ef04f6b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.994349ms Mar 22 02:00:57.213: INFO: Pod "downward-api-8da019a8-df0b-4b2d-82fb-abc13ef04f6b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008775959s Mar 22 02:00:59.219: INFO: Pod "downward-api-8da019a8-df0b-4b2d-82fb-abc13ef04f6b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014584589s STEP: Saw pod success Mar 22 02:00:59.219: INFO: Pod "downward-api-8da019a8-df0b-4b2d-82fb-abc13ef04f6b" satisfied condition "Succeeded or Failed" Mar 22 02:00:59.222: INFO: Trying to get logs from node latest-worker2 pod downward-api-8da019a8-df0b-4b2d-82fb-abc13ef04f6b container dapi-container: STEP: delete the pod Mar 22 02:00:59.429: INFO: Waiting for pod downward-api-8da019a8-df0b-4b2d-82fb-abc13ef04f6b to disappear Mar 22 02:00:59.447: INFO: Pod downward-api-8da019a8-df0b-4b2d-82fb-abc13ef04f6b no longer exists [AfterEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 02:00:59.447: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8872" for this suite.
•{"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":330,"completed":276,"skipped":4964,"failed":18,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-apps] CronJob should not schedule jobs when suspended [Slow] [Conformance]"]} SSSSSSSSSSS
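Editor's note: the pod this test creates exposes its own name, namespace and IP through the downward API. A minimal sketch of such a pod follows; it is a hedged reconstruction from the log, not the suite's actual fixture. The log only confirms the container name dapi-container and the generated pod name; the image, pod name and env-var names below are illustrative.

apiVersion: v1
kind: Pod
metadata:
  name: downward-api-demo            # illustrative; the test generates a unique name
spec:
  restartPolicy: Never               # the test waits for "Succeeded or Failed"
  containers:
  - name: dapi-container
    image: k8s.gcr.io/e2e-test-images/agnhost:2.28   # illustrative image
    command: ["sh", "-c", "env"]     # print the injected variables, then exit
    env:
    - name: POD_NAME                 # illustrative variable names
      valueFrom:
        fieldRef:
          fieldPath: metadata.name
    - name: POD_NAMESPACE
      valueFrom:
        fieldRef:
          fieldPath: metadata.namespace
    - name: POD_IP
      valueFrom:
        fieldRef:
          fieldPath: status.podIP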
•{"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":330,"completed":276,"skipped":4964,"failed":18,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-apps] CronJob should not schedule jobs when suspended [Slow] [Conformance]"]} SSSSSSSSSSS ------------------------------ [sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 02:00:59.483: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating service in namespace services-4972 Mar 22 02:00:59.674: INFO: The status of Pod kube-proxy-mode-detector is Pending, waiting for it to be Running (with Ready = true) Mar 22 02:01:01.695: INFO: The status of Pod kube-proxy-mode-detector is Pending, waiting for it to be Running (with Ready = true) Mar 22 02:01:03.679: INFO: The status of Pod kube-proxy-mode-detector is Running (Ready = true) Mar 22 02:01:03.682: INFO: Running 
Mar 22 02:01:03.682: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=services-4972 exec kube-proxy-mode-detector -- /bin/sh -x -c curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode' Mar 22 02:01:07.575: INFO: stderr: "+ curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode\n" Mar 22 02:01:07.575: INFO: stdout: "iptables" Mar 22 02:01:07.575: INFO: proxyMode: iptables Mar 22 02:01:07.623: INFO: Waiting for pod kube-proxy-mode-detector to disappear Mar 22 02:01:07.639: INFO: Pod kube-proxy-mode-detector no longer exists STEP: creating service affinity-nodeport-timeout in namespace services-4972 STEP: creating replication controller affinity-nodeport-timeout in namespace services-4972 I0322 02:01:07.918335 7 runners.go:190] Created replication controller with name: affinity-nodeport-timeout, namespace: services-4972, replica count: 3 I0322 02:01:10.970196 7 runners.go:190] affinity-nodeport-timeout Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0322 02:01:13.970858 7 runners.go:190] affinity-nodeport-timeout Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Mar 22 02:01:13.986: INFO: Creating new exec pod E0322 02:01:18.024129 7 reflector.go:138] k8s.io/kubernetes/test/e2e/framework/service/jig.go:437: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: the server could not find the requested resource
[... the same EndpointSlice watch failure recurs with increasing backoff at 02:01:19, 02:01:22, 02:01:25, 02:01:34, 02:01:59 and 02:02:49; identical repeats elided ...]
Mar 22 02:03:18.022: FAIL: Unexpected error: <*errors.errorString | 0xc004876030>: { s: "no subset of available IP address found for the endpoint affinity-nodeport-timeout within timeout 2m0s", } no subset of available IP address found for the endpoint affinity-nodeport-timeout within timeout 2m0s occurred Full Stack Trace k8s.io/kubernetes/test/e2e/network.execAffinityTestForSessionAffinityTimeout(0xc000f59760, 0x73e8b88, 0xc0020ae840, 0xc00103b680) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2484 +0x751 k8s.io/kubernetes/test/e2e/network.glob..func24.26()
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1846 +0x9c k8s.io/kubernetes/test/e2e.RunE2ETests(0xc002c6a180) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c k8s.io/kubernetes/test/e2e.TestE2E(0xc002c6a180) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b testing.tRunner(0xc002c6a180, 0x6d60740) /usr/local/go/src/testing/testing.go:1194 +0xef created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1239 +0x2b3 Mar 22 02:03:18.023: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-nodeport-timeout in namespace services-4972, will wait for the garbage collector to delete the pods Mar 22 02:03:18.142: INFO: Deleting ReplicationController affinity-nodeport-timeout took: 7.222246ms Mar 22 02:03:18.743: INFO: Terminating ReplicationController affinity-nodeport-timeout pods took: 600.909058ms [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 STEP: Collecting events from namespace "services-4972". STEP: Found 28 events. Mar 22 02:04:05.212: INFO: At 2021-03-22 02:00:59 +0000 UTC - event for kube-proxy-mode-detector: {default-scheduler } Scheduled: Successfully assigned services-4972/kube-proxy-mode-detector to latest-worker2 Mar 22 02:04:05.212: INFO: At 2021-03-22 02:01:00 +0000 UTC - event for kube-proxy-mode-detector: {kubelet latest-worker2} Pulled: Container image "k8s.gcr.io/e2e-test-images/agnhost:2.28" already present on machine Mar 22 02:04:05.212: INFO: At 2021-03-22 02:01:01 +0000 UTC - event for kube-proxy-mode-detector: {kubelet latest-worker2} Created: Created container agnhost-container Mar 22 02:04:05.212: INFO: At 2021-03-22 02:01:02 +0000 UTC - event for kube-proxy-mode-detector: {kubelet latest-worker2} Started: Started container agnhost-container Mar 22 02:04:05.212: INFO: At 2021-03-22 02:01:07 +0000 UTC - event for affinity-nodeport-timeout: {replication-controller } SuccessfulCreate: Created pod: affinity-nodeport-timeout-wqjcx Mar 22 02:04:05.212: INFO: At 2021-03-22 02:01:07 +0000 UTC - event for kube-proxy-mode-detector: {kubelet latest-worker2} Killing: Stopping container agnhost-container Mar 22 02:04:05.212: INFO: At 2021-03-22 02:01:08 +0000 UTC - event for affinity-nodeport-timeout: {replication-controller } SuccessfulCreate: Created pod: affinity-nodeport-timeout-9dv2j Mar 22 02:04:05.212: INFO: At 2021-03-22 02:01:08 +0000 UTC - event for affinity-nodeport-timeout: {replication-controller } SuccessfulCreate: Created pod: affinity-nodeport-timeout-9mzt8 Mar 22 02:04:05.212: INFO: At 2021-03-22 02:01:08 +0000 UTC - event for affinity-nodeport-timeout-9dv2j: {default-scheduler } Scheduled: Successfully assigned services-4972/affinity-nodeport-timeout-9dv2j to latest-worker2 Mar 22 02:04:05.212: INFO: At 2021-03-22 02:01:08 +0000 UTC - event for affinity-nodeport-timeout-9mzt8: {default-scheduler } Scheduled: Successfully assigned services-4972/affinity-nodeport-timeout-9mzt8 to latest-worker2 Mar 22 02:04:05.212: INFO: At 2021-03-22 02:01:08 +0000 UTC - event for affinity-nodeport-timeout-wqjcx: {default-scheduler } Scheduled: Successfully assigned services-4972/affinity-nodeport-timeout-wqjcx to latest-worker Mar 22 02:04:05.212: INFO: At 2021-03-22 02:01:09 +0000 UTC - event for affinity-nodeport-timeout-wqjcx: {kubelet latest-worker} Pulled: Container image "k8s.gcr.io/e2e-test-images/agnhost:2.28" already present on machine Mar 
22 02:04:05.212: INFO: At 2021-03-22 02:01:10 +0000 UTC - event for affinity-nodeport-timeout-9dv2j: {kubelet latest-worker2} Pulled: Container image "k8s.gcr.io/e2e-test-images/agnhost:2.28" already present on machine Mar 22 02:04:05.212: INFO: At 2021-03-22 02:01:11 +0000 UTC - event for affinity-nodeport-timeout-9mzt8: {kubelet latest-worker2} Pulled: Container image "k8s.gcr.io/e2e-test-images/agnhost:2.28" already present on machine Mar 22 02:04:05.212: INFO: At 2021-03-22 02:01:11 +0000 UTC - event for affinity-nodeport-timeout-wqjcx: {kubelet latest-worker} Created: Created container affinity-nodeport-timeout Mar 22 02:04:05.212: INFO: At 2021-03-22 02:01:12 +0000 UTC - event for affinity-nodeport-timeout-9dv2j: {kubelet latest-worker2} Created: Created container affinity-nodeport-timeout Mar 22 02:04:05.212: INFO: At 2021-03-22 02:01:12 +0000 UTC - event for affinity-nodeport-timeout-9dv2j: {kubelet latest-worker2} Started: Started container affinity-nodeport-timeout Mar 22 02:04:05.212: INFO: At 2021-03-22 02:01:12 +0000 UTC - event for affinity-nodeport-timeout-wqjcx: {kubelet latest-worker} Started: Started container affinity-nodeport-timeout Mar 22 02:04:05.212: INFO: At 2021-03-22 02:01:13 +0000 UTC - event for affinity-nodeport-timeout-9mzt8: {kubelet latest-worker2} Started: Started container affinity-nodeport-timeout Mar 22 02:04:05.212: INFO: At 2021-03-22 02:01:13 +0000 UTC - event for affinity-nodeport-timeout-9mzt8: {kubelet latest-worker2} Created: Created container affinity-nodeport-timeout Mar 22 02:04:05.212: INFO: At 2021-03-22 02:01:14 +0000 UTC - event for execpod-affinitybx5b2: {default-scheduler } Scheduled: Successfully assigned services-4972/execpod-affinitybx5b2 to latest-worker Mar 22 02:04:05.212: INFO: At 2021-03-22 02:01:15 +0000 UTC - event for execpod-affinitybx5b2: {kubelet latest-worker} Pulled: Container image "k8s.gcr.io/e2e-test-images/agnhost:2.28" already present on machine Mar 22 02:04:05.212: INFO: At 2021-03-22 02:01:16 +0000 UTC - event for execpod-affinitybx5b2: {kubelet latest-worker} Created: Created container agnhost-container Mar 22 02:04:05.212: INFO: At 2021-03-22 02:01:16 +0000 UTC - event for execpod-affinitybx5b2: {kubelet latest-worker} Started: Started container agnhost-container Mar 22 02:04:05.212: INFO: At 2021-03-22 02:03:18 +0000 UTC - event for affinity-nodeport-timeout-9dv2j: {kubelet latest-worker2} Killing: Stopping container affinity-nodeport-timeout Mar 22 02:04:05.212: INFO: At 2021-03-22 02:03:18 +0000 UTC - event for affinity-nodeport-timeout-9mzt8: {kubelet latest-worker2} Killing: Stopping container affinity-nodeport-timeout Mar 22 02:04:05.212: INFO: At 2021-03-22 02:03:18 +0000 UTC - event for affinity-nodeport-timeout-wqjcx: {kubelet latest-worker} Killing: Stopping container affinity-nodeport-timeout Mar 22 02:04:05.212: INFO: At 2021-03-22 02:03:18 +0000 UTC - event for execpod-affinitybx5b2: {kubelet latest-worker} Killing: Stopping container agnhost-container Mar 22 02:04:05.215: INFO: POD NODE PHASE GRACE CONDITIONS Mar 22 02:04:05.215: INFO: Mar 22 02:04:05.218: INFO: Logging node info for node latest-control-plane Mar 22 02:04:05.220: INFO: Node Info: &Node{ObjectMeta:{latest-control-plane 490b9532-4cb6-4803-8805-500c50bef538 7019386 0 2021-02-19 10:11:38 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-control-plane kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master:] 
map[kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2021-02-19 10:11:38 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {kubeadm Update v1 2021-02-19 10:11:39 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{}}}}} {kube-controller-manager Update v1 2021-02-19 10:12:16 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:kind://docker/latest/latest-control-plane,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-03-22 01:59:52 +0000 UTC,LastTransitionTime:2021-02-19 10:11:38 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-03-22 01:59:52 +0000 UTC,LastTransitionTime:2021-02-19 
10:11:38 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-03-22 01:59:52 +0000 UTC,LastTransitionTime:2021-02-19 10:11:38 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-03-22 01:59:52 +0000 UTC,LastTransitionTime:2021-02-19 10:12:15 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.14,},NodeAddress{Type:Hostname,Address:latest-control-plane,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:2277e49732264d9b915753a27b5b08cc,SystemUUID:3fcd47a6-9190-448f-a26a-9823c0424f23,BootID:b267d78b-f69b-4338-80e8-3f4944338e5d,KernelVersion:4.15.0-118-generic,OSImage:Ubuntu 20.04 LTS,ContainerRuntimeVersion:containerd://1.3.3-14-g449e9269,KubeletVersion:v1.21.0-alpha.0,KubeProxyVersion:v1.21.0-alpha.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/etcd:3.4.13-0],SizeBytes:254659261,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.21.0-alpha.0],SizeBytes:136866158,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.21.0-alpha.0],SizeBytes:95511852,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.21.0-alpha.0],SizeBytes:88147262,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.21.0-alpha.0],SizeBytes:66088748,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:a3f7549ea04c419276e8b84e90a515bbce5bc8a057be2ed974ec45492eca346e k8s.gcr.io/e2e-test-images/agnhost:2.26],SizeBytes:49216572,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:4f3eec1e602e1e0b0e5858c8f3d399d0ef536799ec1f7666e025f211001cb706 k8s.gcr.io/e2e-test-images/agnhost:2.28],SizeBytes:49210832,},ContainerImage{Names:[k8s.gcr.io/coredns:1.7.0],SizeBytes:45355487,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 docker.io/aquasec/kube-bench:0.3.1],SizeBytes:8042926,},ContainerImage{Names:[k8s.gcr.io/pause:3.2],SizeBytes:685724,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:301268,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 22 02:04:05.220: INFO: Logging kubelet events for node latest-control-plane Mar 22 02:04:05.223: INFO: Logging pods the kubelet thinks is on node latest-control-plane Mar 22 02:04:05.245: INFO: etcd-latest-control-plane started at 2021-02-19 10:11:46 +0000 UTC (0+1 container statuses recorded) Mar 22 02:04:05.245: INFO: Container etcd ready: true, restart count 0 Mar 22 02:04:05.245: INFO: kube-proxy-6jdsd started at 2021-02-19 10:11:54 +0000 UTC (0+1 container statuses recorded) Mar 22 02:04:05.245: INFO: Container kube-proxy ready: true, restart count 0 Mar 22 02:04:05.245: INFO: coredns-74ff55c5b-tqd5x started at 2021-03-22 00:01:48 +0000 UTC (0+1 container statuses recorded) Mar 22 02:04:05.245: INFO: Container coredns ready: true, restart count 0 Mar 22 02:04:05.245: INFO: local-path-provisioner-8b46957d4-54gls started at 
2021-02-19 10:12:21 +0000 UTC (0+1 container statuses recorded) Mar 22 02:04:05.245: INFO: Container local-path-provisioner ready: true, restart count 0 Mar 22 02:04:05.245: INFO: kube-controller-manager-latest-control-plane started at 2021-02-19 10:11:46 +0000 UTC (0+1 container statuses recorded) Mar 22 02:04:05.245: INFO: Container kube-controller-manager ready: true, restart count 0 Mar 22 02:04:05.245: INFO: kube-scheduler-latest-control-plane started at 2021-02-19 10:11:46 +0000 UTC (0+1 container statuses recorded) Mar 22 02:04:05.245: INFO: Container kube-scheduler ready: true, restart count 0 Mar 22 02:04:05.245: INFO: kube-apiserver-latest-control-plane started at 2021-02-19 10:11:46 +0000 UTC (0+1 container statuses recorded) Mar 22 02:04:05.245: INFO: Container kube-apiserver ready: true, restart count 0 Mar 22 02:04:05.245: INFO: kindnet-94zqp started at 2021-02-19 10:11:54 +0000 UTC (0+1 container statuses recorded) Mar 22 02:04:05.245: INFO: Container kindnet-cni ready: true, restart count 0 Mar 22 02:04:05.245: INFO: coredns-74ff55c5b-9rxsk started at 2021-03-22 00:01:47 +0000 UTC (0+1 container statuses recorded) Mar 22 02:04:05.245: INFO: Container coredns ready: true, restart count 0 W0322 02:04:05.252191 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Mar 22 02:04:05.338: INFO: Latency metrics for node latest-control-plane Mar 22 02:04:05.338: INFO: Logging node info for node latest-worker Mar 22 02:04:05.343: INFO: Node Info: &Node{ObjectMeta:{latest-worker 52cd6d4b-d53f-435d-801a-04c2822dec44 7020018 0 2021-02-19 10:12:05 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux io.kubernetes.storage.mock/node:some-mock-node kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-worker kubernetes.io/os:linux] 
map[csi.volume.kubernetes.io/nodeid:{"csi-mock-csi-mock-volumes-11":"csi-mock-csi-mock-volumes-11","csi-mock-csi-mock-volumes-1170":"csi-mock-csi-mock-volumes-1170","csi-mock-csi-mock-volumes-1204":"csi-mock-csi-mock-volumes-1204","csi-mock-csi-mock-volumes-1243":"csi-mock-csi-mock-volumes-1243","csi-mock-csi-mock-volumes-135":"csi-mock-csi-mock-volumes-135","csi-mock-csi-mock-volumes-1517":"csi-mock-csi-mock-volumes-1517","csi-mock-csi-mock-volumes-1561":"csi-mock-csi-mock-volumes-1561","csi-mock-csi-mock-volumes-1595":"csi-mock-csi-mock-volumes-1595","csi-mock-csi-mock-volumes-1714":"csi-mock-csi-mock-volumes-1714","csi-mock-csi-mock-volumes-1732":"csi-mock-csi-mock-volumes-1732","csi-mock-csi-mock-volumes-1876":"csi-mock-csi-mock-volumes-1876","csi-mock-csi-mock-volumes-1945":"csi-mock-csi-mock-volumes-1945","csi-mock-csi-mock-volumes-2041":"csi-mock-csi-mock-volumes-2041","csi-mock-csi-mock-volumes-2057":"csi-mock-csi-mock-volumes-2057","csi-mock-csi-mock-volumes-2208":"csi-mock-csi-mock-volumes-2208","csi-mock-csi-mock-volumes-2302":"csi-mock-csi-mock-volumes-2302","csi-mock-csi-mock-volumes-2459":"csi-mock-csi-mock-volumes-2459","csi-mock-csi-mock-volumes-2511":"csi-mock-csi-mock-volumes-2511","csi-mock-csi-mock-volumes-2706":"csi-mock-csi-mock-volumes-2706","csi-mock-csi-mock-volumes-2710":"csi-mock-csi-mock-volumes-2710","csi-mock-csi-mock-volumes-273":"csi-mock-csi-mock-volumes-273","csi-mock-csi-mock-volumes-2782":"csi-mock-csi-mock-volumes-2782","csi-mock-csi-mock-volumes-2797":"csi-mock-csi-mock-volumes-2797","csi-mock-csi-mock-volumes-2805":"csi-mock-csi-mock-volumes-2805","csi-mock-csi-mock-volumes-2843":"csi-mock-csi-mock-volumes-2843","csi-mock-csi-mock-volumes-3075":"csi-mock-csi-mock-volumes-3075","csi-mock-csi-mock-volumes-3107":"csi-mock-csi-mock-volumes-3107","csi-mock-csi-mock-volumes-3115":"csi-mock-csi-mock-volumes-3115","csi-mock-csi-mock-volumes-3164":"csi-mock-csi-mock-volumes-3164","csi-mock-csi-mock-volumes-3202":"csi-mock-csi-mock-volumes-3202","csi-mock-csi-mock-volumes-3235":"csi-mock-csi-mock-volumes-3235","csi-mock-csi-mock-volumes-3298":"csi-mock-csi-mock-volumes-3298","csi-mock-csi-mock-volumes-3313":"csi-mock-csi-mock-volumes-3313","csi-mock-csi-mock-volumes-3364":"csi-mock-csi-mock-volumes-3364","csi-mock-csi-mock-volumes-3488":"csi-mock-csi-mock-volumes-3488","csi-mock-csi-mock-volumes-3490":"csi-mock-csi-mock-volumes-3490","csi-mock-csi-mock-volumes-3546":"csi-mock-csi-mock-volumes-3546","csi-mock-csi-mock-volumes-3568":"csi-mock-csi-mock-volumes-3568","csi-mock-csi-mock-volumes-3595":"csi-mock-csi-mock-volumes-3595","csi-mock-csi-mock-volumes-3615":"csi-mock-csi-mock-volumes-3615","csi-mock-csi-mock-volumes-3622":"csi-mock-csi-mock-volumes-3622","csi-mock-csi-mock-volumes-3660":"csi-mock-csi-mock-volumes-3660","csi-mock-csi-mock-volumes-3738":"csi-mock-csi-mock-volumes-3738","csi-mock-csi-mock-volumes-380":"csi-mock-csi-mock-volumes-380","csi-mock-csi-mock-volumes-3905":"csi-mock-csi-mock-volumes-3905","csi-mock-csi-mock-volumes-3983":"csi-mock-csi-mock-volumes-3983","csi-mock-csi-mock-volumes-4658":"csi-mock-csi-mock-volumes-4658","csi-mock-csi-mock-volumes-4689":"csi-mock-csi-mock-volumes-4689","csi-mock-csi-mock-volumes-4839":"csi-mock-csi-mock-volumes-4839","csi-mock-csi-mock-volumes-4871":"csi-mock-csi-mock-volumes-4871","csi-mock-csi-mock-volumes-4885":"csi-mock-csi-mock-volumes-4885","csi-mock-csi-mock-volumes-4888":"csi-mock-csi-mock-volumes-4888","csi-mock-csi-mock-volumes-5028":"csi-mock-csi-mock-volumes-5028","csi-mock-csi-mock-volumes-511
8":"csi-mock-csi-mock-volumes-5118","csi-mock-csi-mock-volumes-5120":"csi-mock-csi-mock-volumes-5120","csi-mock-csi-mock-volumes-5160":"csi-mock-csi-mock-volumes-5160","csi-mock-csi-mock-volumes-5164":"csi-mock-csi-mock-volumes-5164","csi-mock-csi-mock-volumes-5225":"csi-mock-csi-mock-volumes-5225","csi-mock-csi-mock-volumes-526":"csi-mock-csi-mock-volumes-526","csi-mock-csi-mock-volumes-5365":"csi-mock-csi-mock-volumes-5365","csi-mock-csi-mock-volumes-5399":"csi-mock-csi-mock-volumes-5399","csi-mock-csi-mock-volumes-5443":"csi-mock-csi-mock-volumes-5443","csi-mock-csi-mock-volumes-5520":"csi-mock-csi-mock-volumes-5520","csi-mock-csi-mock-volumes-5561":"csi-mock-csi-mock-volumes-5561","csi-mock-csi-mock-volumes-5608":"csi-mock-csi-mock-volumes-5608","csi-mock-csi-mock-volumes-5652":"csi-mock-csi-mock-volumes-5652","csi-mock-csi-mock-volumes-5672":"csi-mock-csi-mock-volumes-5672","csi-mock-csi-mock-volumes-569":"csi-mock-csi-mock-volumes-569","csi-mock-csi-mock-volumes-5759":"csi-mock-csi-mock-volumes-5759","csi-mock-csi-mock-volumes-5910":"csi-mock-csi-mock-volumes-5910","csi-mock-csi-mock-volumes-6046":"csi-mock-csi-mock-volumes-6046","csi-mock-csi-mock-volumes-6099":"csi-mock-csi-mock-volumes-6099","csi-mock-csi-mock-volumes-621":"csi-mock-csi-mock-volumes-621","csi-mock-csi-mock-volumes-6347":"csi-mock-csi-mock-volumes-6347","csi-mock-csi-mock-volumes-6447":"csi-mock-csi-mock-volumes-6447","csi-mock-csi-mock-volumes-6752":"csi-mock-csi-mock-volumes-6752","csi-mock-csi-mock-volumes-6763":"csi-mock-csi-mock-volumes-6763","csi-mock-csi-mock-volumes-7184":"csi-mock-csi-mock-volumes-7184","csi-mock-csi-mock-volumes-7244":"csi-mock-csi-mock-volumes-7244","csi-mock-csi-mock-volumes-7259":"csi-mock-csi-mock-volumes-7259","csi-mock-csi-mock-volumes-726":"csi-mock-csi-mock-volumes-726","csi-mock-csi-mock-volumes-7302":"csi-mock-csi-mock-volumes-7302","csi-mock-csi-mock-volumes-7346":"csi-mock-csi-mock-volumes-7346","csi-mock-csi-mock-volumes-7378":"csi-mock-csi-mock-volumes-7378","csi-mock-csi-mock-volumes-7385":"csi-mock-csi-mock-volumes-7385","csi-mock-csi-mock-volumes-746":"csi-mock-csi-mock-volumes-746","csi-mock-csi-mock-volumes-7574":"csi-mock-csi-mock-volumes-7574","csi-mock-csi-mock-volumes-7712":"csi-mock-csi-mock-volumes-7712","csi-mock-csi-mock-volumes-7749":"csi-mock-csi-mock-volumes-7749","csi-mock-csi-mock-volumes-7820":"csi-mock-csi-mock-volumes-7820","csi-mock-csi-mock-volumes-7946":"csi-mock-csi-mock-volumes-7946","csi-mock-csi-mock-volumes-8159":"csi-mock-csi-mock-volumes-8159","csi-mock-csi-mock-volumes-8382":"csi-mock-csi-mock-volumes-8382","csi-mock-csi-mock-volumes-8458":"csi-mock-csi-mock-volumes-8458","csi-mock-csi-mock-volumes-8569":"csi-mock-csi-mock-volumes-8569","csi-mock-csi-mock-volumes-8626":"csi-mock-csi-mock-volumes-8626","csi-mock-csi-mock-volumes-8627":"csi-mock-csi-mock-volumes-8627","csi-mock-csi-mock-volumes-8774":"csi-mock-csi-mock-volumes-8774","csi-mock-csi-mock-volumes-8777":"csi-mock-csi-mock-volumes-8777","csi-mock-csi-mock-volumes-8880":"csi-mock-csi-mock-volumes-8880","csi-mock-csi-mock-volumes-923":"csi-mock-csi-mock-volumes-923","csi-mock-csi-mock-volumes-9279":"csi-mock-csi-mock-volumes-9279","csi-mock-csi-mock-volumes-9372":"csi-mock-csi-mock-volumes-9372","csi-mock-csi-mock-volumes-9687":"csi-mock-csi-mock-volumes-9687","csi-mock-csi-mock-volumes-969":"csi-mock-csi-mock-volumes-969","csi-mock-csi-mock-volumes-983":"csi-mock-csi-mock-volumes-983","csi-mock-csi-mock-volumes-9858":"csi-mock-csi-mock-volumes-9858","csi-mock-csi-mock-volumes-9983":"csi
-mock-csi-mock-volumes-9983","csi-mock-csi-mock-volumes-9992":"csi-mock-csi-mock-volumes-9992","csi-mock-csi-mock-volumes-9995":"csi-mock-csi-mock-volumes-9995"} kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-02-19 10:12:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {kube-controller-manager Update v1 2021-03-22 00:39:31 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}}}}} {kubelet Update v1 2021-03-22 01:53:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:csi.volume.kubernetes.io/nodeid":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:io.kubernetes.storage.mock/node":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:kind://docker/latest/latest-worker,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-03-22 02:03:43 +0000 UTC,LastTransitionTime:2021-02-19 10:12:05 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-03-22 02:03:43 
+0000 UTC,LastTransitionTime:2021-02-19 10:12:05 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-03-22 02:03:43 +0000 UTC,LastTransitionTime:2021-02-19 10:12:05 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-03-22 02:03:43 +0000 UTC,LastTransitionTime:2021-02-19 10:12:25 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.9,},NodeAddress{Type:Hostname,Address:latest-worker,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:9501a3ffa7bf40adaca8f4cb3d1dcc93,SystemUUID:6921bb21-67c9-42ea-b514-81b1daac5968,BootID:b267d78b-f69b-4338-80e8-3f4944338e5d,KernelVersion:4.15.0-118-generic,OSImage:Ubuntu 20.04 LTS,ContainerRuntimeVersion:containerd://1.3.3-14-g449e9269,KubeletVersion:v1.21.0-alpha.0,KubeProxyVersion:v1.21.0-alpha.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/ollivier/clearwater-cassandra@sha256:7460a5e29c91e16bd36534b5284b27832e1c6e0f7c22a4bb79eda79942d250e1 docker.io/ollivier/clearwater-cassandra:hunter],SizeBytes:386500834,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead-prov@sha256:ef3803b4336ea217f9a467dac7f5c4bc3d2cd0f3cbbcc6c101419506fc3a6fa4 docker.io/ollivier/clearwater-homestead-prov:hunter],SizeBytes:360721934,},ContainerImage{Names:[docker.io/ollivier/clearwater-ellis@sha256:abcaf4a9ec74aa746f3b3949f05ee359349fce2b0b3b2deedf67164719cff5dc docker.io/ollivier/clearwater-ellis:hunter],SizeBytes:351519591,},ContainerImage{Names:[docker.io/ollivier/clearwater-homer@sha256:f8738d756137990120f6aafca7e0bc57186ab591a4b51508ba9ddf30688f4de1 docker.io/ollivier/clearwater-homer:hunter],SizeBytes:344304298,},ContainerImage{Names:[docker.io/ollivier/clearwater-astaire@sha256:5bc730435563dc3e31f3a3b7bb58b5899c453929b624052622b3a9a1c2fd04d8 docker.io/ollivier/clearwater-astaire:hunter],SizeBytes:327310970,},ContainerImage{Names:[docker.io/ollivier/clearwater-bono@sha256:0da41ddbf72c158f8e617015450587634cdf6856696005601b2108df92a27254 docker.io/ollivier/clearwater-bono:hunter],SizeBytes:303708624,},ContainerImage{Names:[docker.io/ollivier/clearwater-sprout@sha256:fd91f90bdfbfac30288ca296962e36537f8fa311d34711904e027098227d9f49 docker.io/ollivier/clearwater-sprout:hunter],SizeBytes:298627136,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead@sha256:8467d6c9caefa6a402eba927ee31f8179b2dc3386ac4768dd04f4bd630a1b9e9 docker.io/ollivier/clearwater-homestead:hunter],SizeBytes:295167572,},ContainerImage{Names:[docker.io/ollivier/clearwater-ralf@sha256:7054a741dbd4ea45fa4efd8138c07e21d35fbed627971559de5df2d03883e94f docker.io/ollivier/clearwater-ralf:hunter],SizeBytes:287441316,},ContainerImage{Names:[docker.io/ollivier/clearwater-chronos@sha256:1d5f9b5385db632e3076463fec5a4db5d4a15ef21ac5a112cb42fb0551ffa36d docker.io/ollivier/clearwater-chronos:hunter],SizeBytes:285504787,},ContainerImage{Names:[k8s.gcr.io/etcd:3.4.13-0],SizeBytes:254659261,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.21.0-alpha.0],SizeBytes:136866158,},ContainerImage{Names:[docker.io/aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 
docker.io/aquasec/kube-hunter:0.3.1],SizeBytes:124684106,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:112029652,},ContainerImage{Names:[docker.io/gluster/glusterdynamic-provisioner@sha256:90067cb05a7d217651e84576935934fd9dcff8e6bcdcbaa416bbf36fcd09dbd1 docker.io/gluster/glusterdynamic-provisioner:v1.0],SizeBytes:111200078,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner@sha256:8bc20b52ce066dd4ea3d9eaac40c04ea8a77f47c33789676580cf4c7c9ea3c3d k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner:v1.0],SizeBytes:111199402,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.21.0-alpha.0],SizeBytes:95511852,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.21.0-alpha.0],SizeBytes:88147262,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:85425365,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.21.0-alpha.0],SizeBytes:66088748,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:49230179,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:a3f7549ea04c419276e8b84e90a515bbce5bc8a057be2ed974ec45492eca346e k8s.gcr.io/e2e-test-images/agnhost:2.26],SizeBytes:49216572,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:4f3eec1e602e1e0b0e5858c8f3d399d0ef536799ec1f7666e025f211001cb706 k8s.gcr.io/e2e-test-images/agnhost:2.28],SizeBytes:49210832,},ContainerImage{Names:[docker.io/bitnami/kubectl@sha256:471a2f06834db208d0e4bef5856550f911357c41b72d0baefa591b3714839067 docker.io/bitnami/kubectl:latest],SizeBytes:48898314,},ContainerImage{Names:[k8s.gcr.io/coredns:1.7.0],SizeBytes:45355487,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:41902332,},ContainerImage{Names:[docker.io/library/httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a docker.io/library/httpd:2.4.39-alpine],SizeBytes:41901429,},ContainerImage{Names:[docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 docker.io/library/httpd:2.4.38-alpine],SizeBytes:40765017,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:40765006,},ContainerImage{Names:[docker.io/ollivier/clearwater-live-test@sha256:1ec1c46f531b5466e62e2a3f1e65e97336e8ea0ca50f11a4305775d6aeac9a58 docker.io/ollivier/clearwater-live-test:hunter],SizeBytes:39060692,},ContainerImage{Names:[k8s.gcr.io/build-image/debian-iptables@sha256:abe8cef9e116f2d5ec1175c386e33841ff3386779138b425af876384b0fd7ccb k8s.gcr.io/build-image/debian-iptables:buster-v1.5.0],SizeBytes:38088315,},ContainerImage{Names:[k8s.gcr.io/build-image/debian-iptables@sha256:4c9410a4ee555dcb0e8b7bd6fc77c65ac400f7c5bd4555df8187630efaea6ea4 
k8s.gcr.io/build-image/debian-iptables:buster-v1.3.0],SizeBytes:37934917,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:ff02aacd9766d597883fabafc7ad604c719a57611db1bcc1564c69a45b000a55 gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.17],SizeBytes:25311280,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:24757245,},ContainerImage{Names:[docker.io/pingcap/chaos-daemon@sha256:2138de599ca94104101fcd8f3bd04401f8c88f2bc721d0f77d55a81b85f4d92f docker.io/pingcap/chaos-daemon:v0.8.0],SizeBytes:21885344,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:20c828075d1e36f679d6a91e905b0927141eef5e15be0c9a1ca4a6a0ed9313d2 k8s.gcr.io/sig-storage/csi-provisioner:v2.1.0],SizeBytes:21205045,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:78e3393f5fd5ff6c1e5dada2478cfa456fb7164929e573cf9a87bf6532730679 k8s.gcr.io/sig-storage/csi-provisioner:v1.6.0],SizeBytes:19408504,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:2ffa647e8107cfd39e5f464e738dce014c9f5e51b108da36c3ab621048d0bbab k8s.gcr.io/sig-storage/csi-attacher:v2.2.0],SizeBytes:18451536,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:6c6a0332693a7c456378f6abd2bb40611826c1e1a733cadbdae2daab3125b71c k8s.gcr.io/sig-storage/csi-resizer:v0.5.0],SizeBytes:18412631,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:4051e85640c22f8e00c34dbd273576fc9e1e2829992656588062be9c0f69b04b k8s.gcr.io/e2e-test-images/nonroot:1.1],SizeBytes:17748448,},ContainerImage{Names:[docker.io/pingcap/chaos-mesh@sha256:b90c04665e0275602e52df37d4b6fdd4621df7d4e4c823fc63356559f67dea72 docker.io/pingcap/chaos-mesh:v0.8.0],SizeBytes:13863887,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e docker.io/coredns/coredns:latest docker.io/coredns/coredns:1.8.0],SizeBytes:12945155,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:4a6e0769130686518325b21b0c1d0688b54e7c79244d48e1b15634e98e40c6ef docker.io/coredns/coredns:1.7.1],SizeBytes:12935701,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 docker.io/aquasec/kube-bench:0.3.1],SizeBytes:8042926,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:9622c6a6dac7499a055a382930f4de82905a3c5735c0753f7094115c9c871309 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v1.3.0],SizeBytes:7717137,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 22 02:04:05.344: INFO: Logging kubelet events for node latest-worker Mar 22 02:04:05.347: INFO: Logging pods the kubelet thinks are on node latest-worker Mar 22 02:04:05.372: INFO: kube-proxy-5wvjm started at 2021-02-19 10:12:05 +0000 UTC (0+1 container statuses recorded) Mar 22 02:04:05.372: INFO: Container kube-proxy ready: true, restart count 0 Mar 22 02:04:05.372: INFO: kindnet-l4mzm started at 2021-03-22 00:02:51 +0000 UTC (0+1 
container statuses recorded) Mar 22 02:04:05.372: INFO: Container kindnet-cni ready: true, restart count 0 Mar 22 02:04:05.372: INFO: chaos-daemon-vb9xf started at 2021-03-22 00:02:51 +0000 UTC (0+1 container statuses recorded) Mar 22 02:04:05.372: INFO: Container chaos-daemon ready: true, restart count 0 Mar 22 02:04:05.372: INFO: chaos-controller-manager-69c479c674-rdmrr started at 2021-03-22 00:02:52 +0000 UTC (0+1 container statuses recorded) Mar 22 02:04:05.372: INFO: Container chaos-mesh ready: true, restart count 0 W0322 02:04:05.380573 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Mar 22 02:04:05.632: INFO: Latency metrics for node latest-worker Mar 22 02:04:05.632: INFO: Logging node info for node latest-worker2 Mar 22 02:04:05.636: INFO: Node Info: &Node{ObjectMeta:{latest-worker2 7d2a1377-0c6f-45fb-899e-6c307ecb1803 7020019 0 2021-02-19 10:12:05 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux io.kubernetes.storage.mock/node:some-mock-node kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-worker2 kubernetes.io/os:linux] map[csi.volume.kubernetes.io/nodeid:{"csi-mock-csi-mock-volumes-1005":"csi-mock-csi-mock-volumes-1005","csi-mock-csi-mock-volumes-1125":"csi-mock-csi-mock-volumes-1125","csi-mock-csi-mock-volumes-1158":"csi-mock-csi-mock-volumes-1158","csi-mock-csi-mock-volumes-1167":"csi-mock-csi-mock-volumes-1167","csi-mock-csi-mock-volumes-117":"csi-mock-csi-mock-volumes-117","csi-mock-csi-mock-volumes-123":"csi-mock-csi-mock-volumes-123","csi-mock-csi-mock-volumes-1248":"csi-mock-csi-mock-volumes-1248","csi-mock-csi-mock-volumes-132":"csi-mock-csi-mock-volumes-132","csi-mock-csi-mock-volumes-1462":"csi-mock-csi-mock-volumes-1462","csi-mock-csi-mock-volumes-1508":"csi-mock-csi-mock-volumes-1508","csi-mock-csi-mock-volumes-1620":"csi-mock-csi-mock-volumes-1620","csi-mock-csi-mock-volumes-1694":"csi-mock-csi-mock-volumes-1694","csi-mock-csi-mock-volumes-1823":"csi-mock-csi-mock-volumes-1823","csi-mock-csi-mock-volumes-1829":"csi-mock-csi-mock-volumes-1829","csi-mock-csi-mock-volumes-1852":"csi-mock-csi-mock-volumes-1852","csi-mock-csi-mock-volumes-1856":"csi-mock-csi-mock-volumes-1856","csi-mock-csi-mock-volumes-194":"csi-mock-csi-mock-volumes-194","csi-mock-csi-mock-volumes-1943":"csi-mock-csi-mock-volumes-1943","csi-mock-csi-mock-volumes-2103":"csi-mock-csi-mock-volumes-2103","csi-mock-csi-mock-volumes-2109":"csi-mock-csi-mock-volumes-2109","csi-mock-csi-mock-volumes-2134":"csi-mock-csi-mock-volumes-2134","csi-mock-csi-mock-volumes-2212":"csi-mock-csi-mock-volumes-2212","csi-mock-csi-mock-volumes-2302":"csi-mock-csi-mock-volumes-2302","csi-mock-csi-mock-volumes-2308":"csi-mock-csi-mock-volumes-2308","csi-mock-csi-mock-volumes-2407":"csi-mock-csi-mock-volumes-2407","csi-mock-csi-mock-volumes-2458":"csi-mock-csi-mock-volumes-2458","csi-mock-csi-mock-volumes-2474":"csi-mock-csi-mock-volumes-2474","csi-mock-csi-mock-volumes-254":"csi-mock-csi-mock-volumes-254","csi-mock-csi-mock-volumes-2575":"csi-mock-csi-mock-volumes-2575","csi-mock-csi-mock-volumes-2622":"csi-mock-csi-mock-volumes-2622","csi-mock-csi-mock-volumes-2651":"csi-mock-csi-mock-volumes-2651","csi-mock-csi-mock-volumes-2668":"csi-mock-csi-mock-volumes-2668","csi-mock-csi-mock-volumes-2781":"csi-mock-csi-mock-volumes-2781","csi-mock-csi-mock-volumes-2791":"csi-mock-csi-mock-volumes-2791","csi-mock-csi-mock-volumes-2823":"csi-mock-csi-mock-volumes-2823","csi-mock-csi-mock-volumes-2847":"csi-mock-csi-mock-volumes-2847
","csi-mock-csi-mock-volumes-295":"csi-mock-csi-mock-volumes-295","csi-mock-csi-mock-volumes-3129":"csi-mock-csi-mock-volumes-3129","csi-mock-csi-mock-volumes-3190":"csi-mock-csi-mock-volumes-3190","csi-mock-csi-mock-volumes-3428":"csi-mock-csi-mock-volumes-3428","csi-mock-csi-mock-volumes-3490":"csi-mock-csi-mock-volumes-3490","csi-mock-csi-mock-volumes-3491":"csi-mock-csi-mock-volumes-3491","csi-mock-csi-mock-volumes-3532":"csi-mock-csi-mock-volumes-3532","csi-mock-csi-mock-volumes-3609":"csi-mock-csi-mock-volumes-3609","csi-mock-csi-mock-volumes-368":"csi-mock-csi-mock-volumes-368","csi-mock-csi-mock-volumes-3698":"csi-mock-csi-mock-volumes-3698","csi-mock-csi-mock-volumes-3714":"csi-mock-csi-mock-volumes-3714","csi-mock-csi-mock-volumes-3720":"csi-mock-csi-mock-volumes-3720","csi-mock-csi-mock-volumes-3722":"csi-mock-csi-mock-volumes-3722","csi-mock-csi-mock-volumes-3723":"csi-mock-csi-mock-volumes-3723","csi-mock-csi-mock-volumes-3783":"csi-mock-csi-mock-volumes-3783","csi-mock-csi-mock-volumes-3887":"csi-mock-csi-mock-volumes-3887","csi-mock-csi-mock-volumes-3973":"csi-mock-csi-mock-volumes-3973","csi-mock-csi-mock-volumes-4074":"csi-mock-csi-mock-volumes-4074","csi-mock-csi-mock-volumes-4164":"csi-mock-csi-mock-volumes-4164","csi-mock-csi-mock-volumes-4181":"csi-mock-csi-mock-volumes-4181","csi-mock-csi-mock-volumes-4442":"csi-mock-csi-mock-volumes-4442","csi-mock-csi-mock-volumes-4483":"csi-mock-csi-mock-volumes-4483","csi-mock-csi-mock-volumes-4549":"csi-mock-csi-mock-volumes-4549","csi-mock-csi-mock-volumes-4707":"csi-mock-csi-mock-volumes-4707","csi-mock-csi-mock-volumes-4906":"csi-mock-csi-mock-volumes-4906","csi-mock-csi-mock-volumes-4977":"csi-mock-csi-mock-volumes-4977","csi-mock-csi-mock-volumes-5116":"csi-mock-csi-mock-volumes-5116","csi-mock-csi-mock-volumes-5166":"csi-mock-csi-mock-volumes-5166","csi-mock-csi-mock-volumes-5233":"csi-mock-csi-mock-volumes-5233","csi-mock-csi-mock-volumes-5344":"csi-mock-csi-mock-volumes-5344","csi-mock-csi-mock-volumes-5406":"csi-mock-csi-mock-volumes-5406","csi-mock-csi-mock-volumes-5433":"csi-mock-csi-mock-volumes-5433","csi-mock-csi-mock-volumes-5483":"csi-mock-csi-mock-volumes-5483","csi-mock-csi-mock-volumes-5486":"csi-mock-csi-mock-volumes-5486","csi-mock-csi-mock-volumes-5520":"csi-mock-csi-mock-volumes-5520","csi-mock-csi-mock-volumes-5540":"csi-mock-csi-mock-volumes-5540","csi-mock-csi-mock-volumes-5592":"csi-mock-csi-mock-volumes-5592","csi-mock-csi-mock-volumes-5741":"csi-mock-csi-mock-volumes-5741","csi-mock-csi-mock-volumes-5753":"csi-mock-csi-mock-volumes-5753","csi-mock-csi-mock-volumes-5790":"csi-mock-csi-mock-volumes-5790","csi-mock-csi-mock-volumes-5820":"csi-mock-csi-mock-volumes-5820","csi-mock-csi-mock-volumes-5830":"csi-mock-csi-mock-volumes-5830","csi-mock-csi-mock-volumes-5880":"csi-mock-csi-mock-volumes-5880","csi-mock-csi-mock-volumes-5886":"csi-mock-csi-mock-volumes-5886","csi-mock-csi-mock-volumes-5899":"csi-mock-csi-mock-volumes-5899","csi-mock-csi-mock-volumes-5925":"csi-mock-csi-mock-volumes-5925","csi-mock-csi-mock-volumes-5928":"csi-mock-csi-mock-volumes-5928","csi-mock-csi-mock-volumes-6098":"csi-mock-csi-mock-volumes-6098","csi-mock-csi-mock-volumes-6154":"csi-mock-csi-mock-volumes-6154","csi-mock-csi-mock-volumes-6193":"csi-mock-csi-mock-volumes-6193","csi-mock-csi-mock-volumes-6237":"csi-mock-csi-mock-volumes-6237","csi-mock-csi-mock-volumes-6393":"csi-mock-csi-mock-volumes-6393","csi-mock-csi-mock-volumes-6394":"csi-mock-csi-mock-volumes-6394","csi-mock-csi-mock-volumes-6468":"csi-mock-csi-mock-volumes
-6468","csi-mock-csi-mock-volumes-6508":"csi-mock-csi-mock-volumes-6508","csi-mock-csi-mock-volumes-6516":"csi-mock-csi-mock-volumes-6516","csi-mock-csi-mock-volumes-6520":"csi-mock-csi-mock-volumes-6520","csi-mock-csi-mock-volumes-6574":"csi-mock-csi-mock-volumes-6574","csi-mock-csi-mock-volumes-6663":"csi-mock-csi-mock-volumes-6663","csi-mock-csi-mock-volumes-6715":"csi-mock-csi-mock-volumes-6715","csi-mock-csi-mock-volumes-6754":"csi-mock-csi-mock-volumes-6754","csi-mock-csi-mock-volumes-6804":"csi-mock-csi-mock-volumes-6804","csi-mock-csi-mock-volumes-6918":"csi-mock-csi-mock-volumes-6918","csi-mock-csi-mock-volumes-6925":"csi-mock-csi-mock-volumes-6925","csi-mock-csi-mock-volumes-7092":"csi-mock-csi-mock-volumes-7092","csi-mock-csi-mock-volumes-7139":"csi-mock-csi-mock-volumes-7139","csi-mock-csi-mock-volumes-7270":"csi-mock-csi-mock-volumes-7270","csi-mock-csi-mock-volumes-7273":"csi-mock-csi-mock-volumes-7273","csi-mock-csi-mock-volumes-7442":"csi-mock-csi-mock-volumes-7442","csi-mock-csi-mock-volumes-7448":"csi-mock-csi-mock-volumes-7448","csi-mock-csi-mock-volumes-7543":"csi-mock-csi-mock-volumes-7543","csi-mock-csi-mock-volumes-7597":"csi-mock-csi-mock-volumes-7597","csi-mock-csi-mock-volumes-7608":"csi-mock-csi-mock-volumes-7608","csi-mock-csi-mock-volumes-7642":"csi-mock-csi-mock-volumes-7642","csi-mock-csi-mock-volumes-7659":"csi-mock-csi-mock-volumes-7659","csi-mock-csi-mock-volumes-7725":"csi-mock-csi-mock-volumes-7725","csi-mock-csi-mock-volumes-7760":"csi-mock-csi-mock-volumes-7760","csi-mock-csi-mock-volumes-778":"csi-mock-csi-mock-volumes-778","csi-mock-csi-mock-volumes-7811":"csi-mock-csi-mock-volumes-7811","csi-mock-csi-mock-volumes-7819":"csi-mock-csi-mock-volumes-7819","csi-mock-csi-mock-volumes-791":"csi-mock-csi-mock-volumes-791","csi-mock-csi-mock-volumes-7929":"csi-mock-csi-mock-volumes-7929","csi-mock-csi-mock-volumes-7930":"csi-mock-csi-mock-volumes-7930","csi-mock-csi-mock-volumes-7933":"csi-mock-csi-mock-volumes-7933","csi-mock-csi-mock-volumes-7950":"csi-mock-csi-mock-volumes-7950","csi-mock-csi-mock-volumes-8005":"csi-mock-csi-mock-volumes-8005","csi-mock-csi-mock-volumes-8070":"csi-mock-csi-mock-volumes-8070","csi-mock-csi-mock-volumes-8123":"csi-mock-csi-mock-volumes-8123","csi-mock-csi-mock-volumes-8132":"csi-mock-csi-mock-volumes-8132","csi-mock-csi-mock-volumes-8134":"csi-mock-csi-mock-volumes-8134","csi-mock-csi-mock-volumes-8381":"csi-mock-csi-mock-volumes-8381","csi-mock-csi-mock-volumes-8391":"csi-mock-csi-mock-volumes-8391","csi-mock-csi-mock-volumes-8409":"csi-mock-csi-mock-volumes-8409","csi-mock-csi-mock-volumes-8420":"csi-mock-csi-mock-volumes-8420","csi-mock-csi-mock-volumes-8467":"csi-mock-csi-mock-volumes-8467","csi-mock-csi-mock-volumes-8581":"csi-mock-csi-mock-volumes-8581","csi-mock-csi-mock-volumes-8619":"csi-mock-csi-mock-volumes-8619","csi-mock-csi-mock-volumes-8708":"csi-mock-csi-mock-volumes-8708","csi-mock-csi-mock-volumes-8766":"csi-mock-csi-mock-volumes-8766","csi-mock-csi-mock-volumes-8789":"csi-mock-csi-mock-volumes-8789","csi-mock-csi-mock-volumes-8800":"csi-mock-csi-mock-volumes-8800","csi-mock-csi-mock-volumes-8819":"csi-mock-csi-mock-volumes-8819","csi-mock-csi-mock-volumes-8830":"csi-mock-csi-mock-volumes-8830","csi-mock-csi-mock-volumes-9034":"csi-mock-csi-mock-volumes-9034","csi-mock-csi-mock-volumes-9051":"csi-mock-csi-mock-volumes-9051","csi-mock-csi-mock-volumes-9052":"csi-mock-csi-mock-volumes-9052","csi-mock-csi-mock-volumes-9211":"csi-mock-csi-mock-volumes-9211","csi-mock-csi-mock-volumes-9234":"csi-mock-csi-mock-vo
lumes-9234","csi-mock-csi-mock-volumes-9238":"csi-mock-csi-mock-volumes-9238","csi-mock-csi-mock-volumes-9316":"csi-mock-csi-mock-volumes-9316","csi-mock-csi-mock-volumes-9344":"csi-mock-csi-mock-volumes-9344","csi-mock-csi-mock-volumes-9386":"csi-mock-csi-mock-volumes-9386","csi-mock-csi-mock-volumes-941":"csi-mock-csi-mock-volumes-941","csi-mock-csi-mock-volumes-9416":"csi-mock-csi-mock-volumes-9416","csi-mock-csi-mock-volumes-9426":"csi-mock-csi-mock-volumes-9426","csi-mock-csi-mock-volumes-9429":"csi-mock-csi-mock-volumes-9429","csi-mock-csi-mock-volumes-9438":"csi-mock-csi-mock-volumes-9438","csi-mock-csi-mock-volumes-9439":"csi-mock-csi-mock-volumes-9439","csi-mock-csi-mock-volumes-9665":"csi-mock-csi-mock-volumes-9665","csi-mock-csi-mock-volumes-9772":"csi-mock-csi-mock-volumes-9772","csi-mock-csi-mock-volumes-9793":"csi-mock-csi-mock-volumes-9793","csi-mock-csi-mock-volumes-9921":"csi-mock-csi-mock-volumes-9921","csi-mock-csi-mock-volumes-9953":"csi-mock-csi-mock-volumes-9953"} kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-02-19 10:12:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {kube-controller-manager Update v1 2021-03-22 00:34:44 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}} {e2e.test Update v1 2021-03-22 01:44:00 +0000 UTC FieldsV1 {"f:status":{"f:capacity":{"f:example.com/fakecpu":{}}}}} {kubelet Update v1 2021-03-22 01:53:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:csi.volume.kubernetes.io/nodeid":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:io.kubernetes.storage.mock/node":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:example.com/fakecpu":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:kind://docker/latest/latest-worker2,Unschedulable:false,Ta
ints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},example.com/fakecpu: {{1 3} {} 1k DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},example.com/fakecpu: {{1 3} {} 1k DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-03-22 02:03:43 +0000 UTC,LastTransitionTime:2021-02-19 10:12:05 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-03-22 02:03:43 +0000 UTC,LastTransitionTime:2021-02-19 10:12:05 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-03-22 02:03:43 +0000 UTC,LastTransitionTime:2021-02-19 10:12:05 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-03-22 02:03:43 +0000 UTC,LastTransitionTime:2021-02-19 10:12:35 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.13,},NodeAddress{Type:Hostname,Address:latest-worker2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:c99819c175ab4bf18f91789643e4cec7,SystemUUID:67d3f1bb-f61c-4599-98ac-5879bee4ddde,BootID:b267d78b-f69b-4338-80e8-3f4944338e5d,KernelVersion:4.15.0-118-generic,OSImage:Ubuntu 20.04 LTS,ContainerRuntimeVersion:containerd://1.3.3-14-g449e9269,KubeletVersion:v1.21.0-alpha.0,KubeProxyVersion:v1.21.0-alpha.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/ollivier/clearwater-cassandra@sha256:7460a5e29c91e16bd36534b5284b27832e1c6e0f7c22a4bb79eda79942d250e1 docker.io/ollivier/clearwater-cassandra:hunter],SizeBytes:386500834,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead-prov@sha256:ef3803b4336ea217f9a467dac7f5c4bc3d2cd0f3cbbcc6c101419506fc3a6fa4 docker.io/ollivier/clearwater-homestead-prov:hunter],SizeBytes:360721934,},ContainerImage{Names:[docker.io/ollivier/clearwater-ellis@sha256:abcaf4a9ec74aa746f3b3949f05ee359349fce2b0b3b2deedf67164719cff5dc docker.io/ollivier/clearwater-ellis:hunter],SizeBytes:351519591,},ContainerImage{Names:[docker.io/ollivier/clearwater-homer@sha256:f8738d756137990120f6aafca7e0bc57186ab591a4b51508ba9ddf30688f4de1 docker.io/ollivier/clearwater-homer:hunter],SizeBytes:344304298,},ContainerImage{Names:[docker.io/ollivier/clearwater-astaire@sha256:5bc730435563dc3e31f3a3b7bb58b5899c453929b624052622b3a9a1c2fd04d8 docker.io/ollivier/clearwater-astaire:hunter],SizeBytes:327310970,},ContainerImage{Names:[docker.io/ollivier/clearwater-bono@sha256:0da41ddbf72c158f8e617015450587634cdf6856696005601b2108df92a27254 
docker.io/ollivier/clearwater-bono:hunter],SizeBytes:303708624,},ContainerImage{Names:[docker.io/ollivier/clearwater-sprout@sha256:fd91f90bdfbfac30288ca296962e36537f8fa311d34711904e027098227d9f49 docker.io/ollivier/clearwater-sprout:hunter],SizeBytes:298627136,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead@sha256:8467d6c9caefa6a402eba927ee31f8179b2dc3386ac4768dd04f4bd630a1b9e9 docker.io/ollivier/clearwater-homestead:hunter],SizeBytes:295167572,},ContainerImage{Names:[docker.io/ollivier/clearwater-ralf@sha256:7054a741dbd4ea45fa4efd8138c07e21d35fbed627971559de5df2d03883e94f docker.io/ollivier/clearwater-ralf:hunter],SizeBytes:287441316,},ContainerImage{Names:[docker.io/ollivier/clearwater-chronos@sha256:1d5f9b5385db632e3076463fec5a4db5d4a15ef21ac5a112cb42fb0551ffa36d docker.io/ollivier/clearwater-chronos:hunter],SizeBytes:285504787,},ContainerImage{Names:[k8s.gcr.io/etcd:3.4.13-0],SizeBytes:254659261,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.21.0-alpha.0],SizeBytes:136866158,},ContainerImage{Names:[docker.io/aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 docker.io/aquasec/kube-hunter:0.3.1],SizeBytes:124684106,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:112029652,},ContainerImage{Names:[docker.io/gluster/glusterdynamic-provisioner@sha256:90067cb05a7d217651e84576935934fd9dcff8e6bcdcbaa416bbf36fcd09dbd1 docker.io/gluster/glusterdynamic-provisioner:v1.0],SizeBytes:111200078,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner@sha256:8bc20b52ce066dd4ea3d9eaac40c04ea8a77f47c33789676580cf4c7c9ea3c3d k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner:v1.0],SizeBytes:111199402,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.21.0-alpha.0],SizeBytes:95511852,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.21.0-alpha.0],SizeBytes:88147262,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:85425365,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.21.0-alpha.0],SizeBytes:66088748,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:49230179,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:a3f7549ea04c419276e8b84e90a515bbce5bc8a057be2ed974ec45492eca346e k8s.gcr.io/e2e-test-images/agnhost:2.26],SizeBytes:49216572,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:4f3eec1e602e1e0b0e5858c8f3d399d0ef536799ec1f7666e025f211001cb706 k8s.gcr.io/e2e-test-images/agnhost:2.28],SizeBytes:49210832,},ContainerImage{Names:[k8s.gcr.io/coredns:1.7.0],SizeBytes:45355487,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:41902332,},ContainerImage{Names:[docker.io/library/httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a 
docker.io/library/httpd:2.4.39-alpine],SizeBytes:41901429,},ContainerImage{Names:[docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 docker.io/library/httpd:2.4.38-alpine],SizeBytes:40765017,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:40765006,},ContainerImage{Names:[docker.io/ollivier/clearwater-live-test@sha256:1ec1c46f531b5466e62e2a3f1e65e97336e8ea0ca50f11a4305775d6aeac9a58 docker.io/ollivier/clearwater-live-test:hunter],SizeBytes:39060692,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:ff02aacd9766d597883fabafc7ad604c719a57611db1bcc1564c69a45b000a55 gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.17],SizeBytes:25311280,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:24757245,},ContainerImage{Names:[docker.io/pingcap/chaos-daemon@sha256:2138de599ca94104101fcd8f3bd04401f8c88f2bc721d0f77d55a81b85f4d92f docker.io/pingcap/chaos-daemon:v0.8.0],SizeBytes:21885344,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:20c828075d1e36f679d6a91e905b0927141eef5e15be0c9a1ca4a6a0ed9313d2 k8s.gcr.io/sig-storage/csi-provisioner:v2.1.0],SizeBytes:21205045,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:78e3393f5fd5ff6c1e5dada2478cfa456fb7164929e573cf9a87bf6532730679 k8s.gcr.io/sig-storage/csi-provisioner:v1.6.0],SizeBytes:19408504,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/regression-issue-74839@sha256:b4f1d8d61bdad84bd50442d161d5460e4019d53e989b64220fdbc62fc87d76bf k8s.gcr.io/e2e-test-images/regression-issue-74839:1.2],SizeBytes:18651485,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:2ffa647e8107cfd39e5f464e738dce014c9f5e51b108da36c3ab621048d0bbab k8s.gcr.io/sig-storage/csi-attacher:v2.2.0],SizeBytes:18451536,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:6c6a0332693a7c456378f6abd2bb40611826c1e1a733cadbdae2daab3125b71c k8s.gcr.io/sig-storage/csi-resizer:v0.5.0],SizeBytes:18412631,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:4051e85640c22f8e00c34dbd273576fc9e1e2829992656588062be9c0f69b04b k8s.gcr.io/e2e-test-images/nonroot:1.1],SizeBytes:17748448,},ContainerImage{Names:[docker.io/pingcap/chaos-mesh@sha256:b90c04665e0275602e52df37d4b6fdd4621df7d4e4c823fc63356559f67dea72 docker.io/pingcap/chaos-mesh:v0.8.0],SizeBytes:13863887,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e docker.io/coredns/coredns:1.8.0],SizeBytes:12945155,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:4a6e0769130686518325b21b0c1d0688b54e7c79244d48e1b15634e98e40c6ef docker.io/coredns/coredns:1.7.1],SizeBytes:12935701,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:642ff9910da6ea9a8624b0234eef52af9ca75ecbec474c5507cb096bdfbae4e5 docker.io/coredns/coredns:latest],SizeBytes:12893350,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 
k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 docker.io/aquasec/kube-bench:0.3.1],SizeBytes:8042926,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:9622c6a6dac7499a055a382930f4de82905a3c5735c0753f7094115c9c871309 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v1.3.0],SizeBytes:7717137,},ContainerImage{Names:[quay.io/coreos/etcd:v2.2.5],SizeBytes:7670543,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 22 02:04:05.637: INFO: Logging kubelet events for node latest-worker2 Mar 22 02:04:05.639: INFO: Logging pods the kubelet thinks are on node latest-worker2 Mar 22 02:04:05.654: INFO: chaos-daemon-4zjcg started at 2021-03-22 00:02:52 +0000 UTC (0+1 container statuses recorded) Mar 22 02:04:05.654: INFO: Container chaos-daemon ready: true, restart count 0 Mar 22 02:04:05.654: INFO: kindnet-7qb7q started at 2021-03-22 00:02:52 +0000 UTC (0+1 container statuses recorded) Mar 22 02:04:05.654: INFO: Container kindnet-cni ready: true, restart count 0 Mar 22 02:04:05.654: INFO: kube-proxy-7q92q started at 2021-02-19 10:12:05 +0000 UTC (0+1 container statuses recorded) Mar 22 02:04:05.655: INFO: Container kube-proxy ready: true, restart count 0 W0322 02:04:05.661034 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Mar 22 02:04:05.922: INFO: Latency metrics for node latest-worker2 Mar 22 02:04:05.923: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-4972" for this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • Failure [186.450 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance] [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Mar 22 02:03:18.023: Unexpected error: <*errors.errorString | 0xc004876030>: { s: "no subset of available IP address found for the endpoint affinity-nodeport-timeout within timeout 2m0s", } no subset of available IP address found for the endpoint affinity-nodeport-timeout within timeout 2m0s occurred /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2484 ------------------------------ {"msg":"FAILED [sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","total":330,"completed":276,"skipped":4975,"failed":19,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server 
[Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-apps] CronJob should not schedule jobs when suspended [Slow] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]} [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-scheduling] LimitRange /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 02:04:05.933: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename limitrange STEP: Waiting for a default service account to be provisioned in namespace [It] should create a LimitRange with defaults and ensure pod has those defaults applied. 
[Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a LimitRange STEP: Setting up watch STEP: Submitting a LimitRange Mar 22 02:04:06.031: INFO: observed the limitRanges list STEP: Verifying LimitRange creation was observed STEP: Fetching the LimitRange to ensure it has proper values Mar 22 02:04:06.060: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] Mar 22 02:04:06.060: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] STEP: Creating a Pod with no resource requirements STEP: Ensuring Pod has resource requirements applied from LimitRange Mar 22 02:04:06.120: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] Mar 22 02:04:06.121: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] STEP: Creating a Pod with partial resource requirements STEP: Ensuring Pod has merged resource requirements applied from LimitRange Mar 22 02:04:06.184: INFO: Verifying requests: expected map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}] Mar 22 02:04:06.184: INFO: Verifying limits: expected map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] STEP: Failing to create a Pod with less than min resources STEP: Failing to create a Pod with more than max resources STEP: Updating a LimitRange STEP: Verifying LimitRange updating is effective STEP: Creating a Pod with less than former min resources STEP: Failing to create a Pod with more than max resources STEP: Deleting a LimitRange STEP: Verifying the LimitRange was deleted Mar 22 02:04:13.888: INFO: limitRange is already deleted STEP: Creating a Pod with more than former max resources [AfterEach] [sig-scheduling] LimitRange /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 02:04:13.896: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "limitrange-8673" for this suite. 
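The LimitRange steps above exercise apiserver admission defaulting: a container-type LimitRange supplies default limits and defaultRequest values, so a pod created with no resource requirements has them filled in at admission, and a pod with partial requirements keeps its explicit values while the rest are merged in (the 300m-CPU pod above keeps its own CPU figure but inherits the 500Mi memory limit). A minimal client-go sketch of that flow follows; it is not the test's own code, and the kubeconfig path, namespace, pod image, and the choice to mirror the 100m/500m CPU and 200Mi/500Mi memory values verified in the log are assumptions for illustration.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client the same way the suite does (kubeconfig path is an assumption).
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ns := "limitrange-demo" // hypothetical namespace, assumed to already exist

	// Container-type LimitRange mirroring the defaults verified in the log above.
	lr := &corev1.LimitRange{
		ObjectMeta: metav1.ObjectMeta{Name: "lr-defaults"},
		Spec: corev1.LimitRangeSpec{
			Limits: []corev1.LimitRangeItem{{
				Type: corev1.LimitTypeContainer,
				Default: corev1.ResourceList{ // becomes the pod's limits
					corev1.ResourceCPU:    resource.MustParse("500m"),
					corev1.ResourceMemory: resource.MustParse("500Mi"),
				},
				DefaultRequest: corev1.ResourceList{ // becomes the pod's requests
					corev1.ResourceCPU:    resource.MustParse("100m"),
					corev1.ResourceMemory: resource.MustParse("200Mi"),
				},
			}},
		},
	}
	if _, err := cs.CoreV1().LimitRanges(ns).Create(context.TODO(), lr, metav1.CreateOptions{}); err != nil {
		panic(err)
	}

	// A pod created with no resource requirements at all...
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "no-resources"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{Name: "c", Image: "k8s.gcr.io/pause:3.2"}},
		},
	}
	created, err := cs.CoreV1().Pods(ns).Create(context.TODO(), pod, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	// ...comes back with the LimitRange defaults already applied by admission.
	fmt.Println(created.Spec.Containers[0].Resources.Requests.Cpu())  // expect 100m
	fmt.Println(created.Spec.Containers[0].Resources.Limits.Memory()) // expect 500Mi
}

Creating the pod with only some resources set would show the merge behavior logged above: explicit values win, and only the missing fields are defaulted.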
• [SLOW TEST:7.985 seconds] [sig-scheduling] LimitRange /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]","total":330,"completed":277,"skipped":4975,"failed":19,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-apps] CronJob should not schedule jobs when suspended [Slow] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]} S ------------------------------ [sig-node] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 02:04:13.919: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46 [It] should run the container 
with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Mar 22 02:04:14.031: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-0c8e9355-6c7b-46e9-9612-aeed81cf53db" in namespace "security-context-test-3234" to be "Succeeded or Failed" Mar 22 02:04:14.037: INFO: Pod "busybox-readonly-false-0c8e9355-6c7b-46e9-9612-aeed81cf53db": Phase="Pending", Reason="", readiness=false. Elapsed: 6.831349ms Mar 22 02:04:16.267: INFO: Pod "busybox-readonly-false-0c8e9355-6c7b-46e9-9612-aeed81cf53db": Phase="Pending", Reason="", readiness=false. Elapsed: 2.236420305s Mar 22 02:04:18.278: INFO: Pod "busybox-readonly-false-0c8e9355-6c7b-46e9-9612-aeed81cf53db": Phase="Running", Reason="", readiness=true. Elapsed: 4.247850951s Mar 22 02:04:20.283: INFO: Pod "busybox-readonly-false-0c8e9355-6c7b-46e9-9612-aeed81cf53db": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.25232339s Mar 22 02:04:20.283: INFO: Pod "busybox-readonly-false-0c8e9355-6c7b-46e9-9612-aeed81cf53db" satisfied condition "Succeeded or Failed" [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 02:04:20.283: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-3234" for this suite. • [SLOW TEST:6.373 seconds] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 When creating a pod with readOnlyRootFilesystem /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:171 should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":330,"completed":278,"skipped":4976,"failed":19,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should be able to create a functioning 
NodePort service [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-apps] CronJob should not schedule jobs when suspended [Slow] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]} SS ------------------------------ [sig-node] Pods should be submitted and removed [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 02:04:20.292: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:187 [It] should be submitted and removed [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating the pod STEP: setting up watch STEP: submitting the pod to kubernetes Mar 22 02:04:20.790: INFO: observed the pod list STEP: verifying the pod is in kubernetes STEP: verifying pod creation was observed STEP: deleting the pod gracefully STEP: verifying pod deletion was observed [AfterEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 02:05:04.972: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-7795" for this suite. 
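The submit-and-remove sequence above leans on the watch API: the test opens a watch before submitting the pod, so it can assert an ADDED event for the creation and, after a graceful delete, a DELETED event once the kubelet finishes termination (most of this spec's ~44 s runtime is that graceful-shutdown wait). A rough client-go sketch of the same pattern follows; the namespace, pod name, marker label, and 30 s grace period are hypothetical choices, not values taken from the test source.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/watch"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config") // assumed path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ns := "pods-demo" // hypothetical namespace, assumed to already exist

	// Open the watch first, filtered to a marker label, so no event is missed.
	w, err := cs.CoreV1().Pods(ns).Watch(context.TODO(), metav1.ListOptions{
		LabelSelector: "demo=watched", // hypothetical label
	})
	if err != nil {
		panic(err)
	}
	defer w.Stop()

	// Submit the pod the watch is waiting for.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:   "watched-pod",
			Labels: map[string]string{"demo": "watched"},
		},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{Name: "c", Image: "k8s.gcr.io/pause:3.2"}},
		},
	}
	if _, err := cs.CoreV1().Pods(ns).Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}

	grace := int64(30) // graceful deletion window, as in "deleting the pod gracefully"
	for ev := range w.ResultChan() {
		fmt.Println("observed event:", ev.Type)
		switch ev.Type {
		case watch.Added:
			// Creation observed; request a graceful delete and keep watching.
			err := cs.CoreV1().Pods(ns).Delete(context.TODO(), pod.Name,
				metav1.DeleteOptions{GracePeriodSeconds: &grace})
			if err != nil {
				panic(err)
			}
		case watch.Deleted:
			return // deletion observed: the pod is gone from the API server
		}
	}
}

The same clientset also backs the collection delete exercised in the next spec; registering the watch before the first write is what makes the event assertions race-free.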
• [SLOW TEST:44.723 seconds] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should be submitted and removed [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Pods should be submitted and removed [NodeConformance] [Conformance]","total":330,"completed":279,"skipped":4978,"failed":19,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-apps] CronJob should not schedule jobs when suspended [Slow] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSS ------------------------------ [sig-node] Pods should delete a collection of pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 02:05:05.016: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:187 [It] should delete a collection of pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Create set of pods Mar 22 02:05:05.208: INFO: created test-pod-1 Mar 22 02:05:05.236: INFO: created test-pod-2 
Mar 22 02:05:05.250: INFO: created test-pod-3 STEP: waiting for all 3 pods to be located STEP: waiting for all pods to be deleted [AfterEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 02:05:05.434: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-4360" for this suite. •{"msg":"PASSED [sig-node] Pods should delete a collection of pods [Conformance]","total":330,"completed":280,"skipped":4988,"failed":19,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-apps] CronJob should not schedule jobs when suspended [Slow] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]} SSSSSS ------------------------------ [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 02:05:05.441: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating secret with name secret-test-e5d6b2fb-9f01-4572-8d32-9ae7ef5420ed STEP: Creating a pod to test consume secrets Mar 22 02:05:05.621: INFO: 
Waiting up to 5m0s for pod "pod-secrets-9534c8c6-8cde-48d5-a4ad-c565c6f1f5ba" in namespace "secrets-8903" to be "Succeeded or Failed" Mar 22 02:05:05.643: INFO: Pod "pod-secrets-9534c8c6-8cde-48d5-a4ad-c565c6f1f5ba": Phase="Pending", Reason="", readiness=false. Elapsed: 21.600364ms Mar 22 02:05:07.647: INFO: Pod "pod-secrets-9534c8c6-8cde-48d5-a4ad-c565c6f1f5ba": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025976032s Mar 22 02:05:09.652: INFO: Pod "pod-secrets-9534c8c6-8cde-48d5-a4ad-c565c6f1f5ba": Phase="Pending", Reason="", readiness=false. Elapsed: 4.030454305s Mar 22 02:05:11.657: INFO: Pod "pod-secrets-9534c8c6-8cde-48d5-a4ad-c565c6f1f5ba": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.036062807s STEP: Saw pod success Mar 22 02:05:11.657: INFO: Pod "pod-secrets-9534c8c6-8cde-48d5-a4ad-c565c6f1f5ba" satisfied condition "Succeeded or Failed" Mar 22 02:05:11.661: INFO: Trying to get logs from node latest-worker2 pod pod-secrets-9534c8c6-8cde-48d5-a4ad-c565c6f1f5ba container secret-volume-test: STEP: delete the pod Mar 22 02:05:11.715: INFO: Waiting for pod pod-secrets-9534c8c6-8cde-48d5-a4ad-c565c6f1f5ba to disappear Mar 22 02:05:11.727: INFO: Pod pod-secrets-9534c8c6-8cde-48d5-a4ad-c565c6f1f5ba no longer exists [AfterEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 02:05:11.727: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-8903" for this suite. • [SLOW TEST:6.295 seconds] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":330,"completed":281,"skipped":4994,"failed":19,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work 
for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-apps] CronJob should not schedule jobs when suspended [Slow] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 02:05:11.736: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241 [It] should support --unix-socket=/path [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Starting the proxy Mar 22 02:05:11.898: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=kubectl-8483 proxy --unix-socket=/tmp/kubectl-proxy-unix603191317/test' STEP: retrieving proxy /api/ output [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 02:05:11.979: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8483" for this suite. 
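The proxy invocation logged above can be reproduced standalone; a short sketch (socket path illustrative), using curl's Unix-socket support to fetch the same /api/ output:

kubectl proxy --unix-socket=/tmp/kubectl-proxy.sock &
sleep 1   # give the proxy a moment to bind the socket
curl --unix-socket /tmp/kubectl-proxy.sock http://localhost/api/
kill %1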
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance]","total":330,"completed":282,"skipped":5017,"failed":19,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-apps] CronJob should not schedule jobs when suspended [Slow] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]} SS ------------------------------ [sig-storage] Secrets should be immutable if `immutable` field is set [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 02:05:11.991: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be immutable if `immutable` field is set [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [AfterEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 02:05:12.228: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-1939" for this suite. 
•{"msg":"PASSED [sig-storage] Secrets should be immutable if `immutable` field is set [Conformance]","total":330,"completed":283,"skipped":5019,"failed":19,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-apps] CronJob should not schedule jobs when suspended [Slow] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]} S ------------------------------ [sig-network] DNS should support configurable pod DNS nameservers [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 02:05:12.248: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should support configurable pod DNS nameservers [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod with dnsPolicy=None and customized dnsConfig... 
Mar 22 02:05:12.354: INFO: Created pod &Pod{ObjectMeta:{test-dns-nameservers dns-1889 ee3fe7a6-5b6a-41b9-8c25-a43b7795007c 7020435 0 2021-03-22 02:05:12 +0000 UTC map[] map[] [] [] [{e2e.test Update v1 2021-03-22 02:05:12 +0000 UTC FieldsV1 {"f:spec":{"f:containers":{"k:{\"name\":\"agnhost-container\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsConfig":{".":{},"f:nameservers":{},"f:searches":{}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-n52pr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-n52pr,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost-container,Image:k8s.gcr.io/e2e-test-images/agnhost:2.28,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-n52pr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[1.1.1.1],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{},},ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHo
stnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 22 02:05:12.395: INFO: The status of Pod test-dns-nameservers is Pending, waiting for it to be Running (with Ready = true) Mar 22 02:05:14.578: INFO: The status of Pod test-dns-nameservers is Pending, waiting for it to be Running (with Ready = true) Mar 22 02:05:16.400: INFO: The status of Pod test-dns-nameservers is Pending, waiting for it to be Running (with Ready = true) Mar 22 02:05:18.400: INFO: The status of Pod test-dns-nameservers is Running (Ready = true) STEP: Verifying customized DNS suffix list is configured on pod... Mar 22 02:05:18.400: INFO: ExecWithOptions {Command:[/agnhost dns-suffix] Namespace:dns-1889 PodName:test-dns-nameservers ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 22 02:05:18.400: INFO: >>> kubeConfig: /root/.kube/config STEP: Verifying customized DNS server is configured on pod... Mar 22 02:05:18.557: INFO: ExecWithOptions {Command:[/agnhost dns-server-list] Namespace:dns-1889 PodName:test-dns-nameservers ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 22 02:05:18.557: INFO: >>> kubeConfig: /root/.kube/config Mar 22 02:05:18.681: INFO: Deleting pod test-dns-nameservers... [AfterEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 02:05:18.752: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-1889" for this suite. 
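Stripped of the field-manager noise, the pod dumped above reduces to the sketch below; the nameserver 1.1.1.1 and search domain resolv.conf.local come straight from the log, the pod name is illustrative:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: dns-demo
spec:
  dnsPolicy: "None"
  dnsConfig:
    nameservers: ["1.1.1.1"]
    searches: ["resolv.conf.local"]
  containers:
  - name: agnhost-container
    image: k8s.gcr.io/e2e-test-images/agnhost:2.28
    args: ["pause"]
EOF
# with dnsPolicy=None the pod inherits nothing from the node or cluster DNS,
# so resolv.conf should contain only the custom values
kubectl exec dns-demo -- cat /etc/resolv.conf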
• [SLOW TEST:6.523 seconds] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should support configurable pod DNS nameservers [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":330,"completed":284,"skipped":5020,"failed":19,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-apps] CronJob should not schedule jobs when suspended [Slow] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 02:05:18.773: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241 [BeforeEach] Kubectl label /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1308 STEP: creating the pod Mar 22 
02:05:19.133: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=kubectl-7053 create -f -' Mar 22 02:05:19.580: INFO: stderr: "" Mar 22 02:05:19.580: INFO: stdout: "pod/pause created\n" Mar 22 02:05:19.580: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] Mar 22 02:05:19.580: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-7053" to be "running and ready" Mar 22 02:05:19.813: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 233.550516ms Mar 22 02:05:21.819: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.239433738s Mar 22 02:05:23.823: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 4.243343324s Mar 22 02:05:23.823: INFO: Pod "pause" satisfied condition "running and ready" Mar 22 02:05:23.823: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause] [It] should update the label on a resource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: adding the label testing-label with value testing-label-value to a pod Mar 22 02:05:23.823: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=kubectl-7053 label pods pause testing-label=testing-label-value' Mar 22 02:05:23.939: INFO: stderr: "" Mar 22 02:05:23.939: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod has the label testing-label with the value testing-label-value Mar 22 02:05:23.939: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=kubectl-7053 get pod pause -L testing-label' Mar 22 02:05:24.038: INFO: stderr: "" Mar 22 02:05:24.038: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 5s testing-label-value\n" STEP: removing the label testing-label of a pod Mar 22 02:05:24.038: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=kubectl-7053 label pods pause testing-label-' Mar 22 02:05:24.147: INFO: stderr: "" Mar 22 02:05:24.147: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod doesn't have the label testing-label Mar 22 02:05:24.147: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=kubectl-7053 get pod pause -L testing-label' Mar 22 02:05:24.254: INFO: stderr: "" Mar 22 02:05:24.254: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 5s \n" [AfterEach] Kubectl label /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1314 STEP: using delete to clean up resources Mar 22 02:05:24.254: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=kubectl-7053 delete --grace-period=0 --force -f -' Mar 22 02:05:24.413: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Mar 22 02:05:24.413: INFO: stdout: "pod \"pause\" force deleted\n" Mar 22 02:05:24.413: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=kubectl-7053 get rc,svc -l name=pause --no-headers' Mar 22 02:05:24.524: INFO: stderr: "No resources found in kubectl-7053 namespace.\n" Mar 22 02:05:24.524: INFO: stdout: "" Mar 22 02:05:24.524: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=kubectl-7053 get pods -l name=pause -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Mar 22 02:05:24.774: INFO: stderr: "" Mar 22 02:05:24.774: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 02:05:24.774: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7053" for this suite. • [SLOW TEST:6.058 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl label /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1306 should update the label on a resource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance]","total":330,"completed":285,"skipped":5099,"failed":19,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching 
a Service [Conformance]","[sig-apps] CronJob should not schedule jobs when suspended [Slow] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 02:05:24.831: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name configmap-test-volume-0f2ed8e3-c4a7-443c-90bb-7b581afa297f STEP: Creating a pod to test consume configMaps Mar 22 02:05:25.125: INFO: Waiting up to 5m0s for pod "pod-configmaps-79be2ef2-429a-4262-8a71-137c8b62cc18" in namespace "configmap-5234" to be "Succeeded or Failed" Mar 22 02:05:25.159: INFO: Pod "pod-configmaps-79be2ef2-429a-4262-8a71-137c8b62cc18": Phase="Pending", Reason="", readiness=false. Elapsed: 34.298696ms Mar 22 02:05:27.178: INFO: Pod "pod-configmaps-79be2ef2-429a-4262-8a71-137c8b62cc18": Phase="Pending", Reason="", readiness=false. Elapsed: 2.053411691s Mar 22 02:05:29.183: INFO: Pod "pod-configmaps-79be2ef2-429a-4262-8a71-137c8b62cc18": Phase="Pending", Reason="", readiness=false. Elapsed: 4.058006268s Mar 22 02:05:31.187: INFO: Pod "pod-configmaps-79be2ef2-429a-4262-8a71-137c8b62cc18": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.06271672s STEP: Saw pod success Mar 22 02:05:31.187: INFO: Pod "pod-configmaps-79be2ef2-429a-4262-8a71-137c8b62cc18" satisfied condition "Succeeded or Failed" Mar 22 02:05:31.191: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-79be2ef2-429a-4262-8a71-137c8b62cc18 container agnhost-container: STEP: delete the pod Mar 22 02:05:31.220: INFO: Waiting for pod pod-configmaps-79be2ef2-429a-4262-8a71-137c8b62cc18 to disappear Mar 22 02:05:31.291: INFO: Pod pod-configmaps-79be2ef2-429a-4262-8a71-137c8b62cc18 no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 02:05:31.292: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5234" for this suite. 
• [SLOW TEST:6.470 seconds] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":330,"completed":286,"skipped":5111,"failed":19,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-apps] CronJob should not schedule jobs when suspended [Slow] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]} [sig-node] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 02:05:31.302: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's args [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test substitution in container's args Mar 22 02:05:31.449: INFO: Waiting up to 5m0s for pod 
"var-expansion-21766526-6f02-4ac5-a553-a9050d83dfda" in namespace "var-expansion-1167" to be "Succeeded or Failed" Mar 22 02:05:31.452: INFO: Pod "var-expansion-21766526-6f02-4ac5-a553-a9050d83dfda": Phase="Pending", Reason="", readiness=false. Elapsed: 2.958423ms Mar 22 02:05:33.457: INFO: Pod "var-expansion-21766526-6f02-4ac5-a553-a9050d83dfda": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007679835s Mar 22 02:05:35.461: INFO: Pod "var-expansion-21766526-6f02-4ac5-a553-a9050d83dfda": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011860918s STEP: Saw pod success Mar 22 02:05:35.461: INFO: Pod "var-expansion-21766526-6f02-4ac5-a553-a9050d83dfda" satisfied condition "Succeeded or Failed" Mar 22 02:05:35.463: INFO: Trying to get logs from node latest-worker pod var-expansion-21766526-6f02-4ac5-a553-a9050d83dfda container dapi-container: STEP: delete the pod Mar 22 02:05:35.585: INFO: Waiting for pod var-expansion-21766526-6f02-4ac5-a553-a9050d83dfda to disappear Mar 22 02:05:35.610: INFO: Pod var-expansion-21766526-6f02-4ac5-a553-a9050d83dfda no longer exists [AfterEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 02:05:35.610: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-1167" for this suite. •{"msg":"PASSED [sig-node] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":330,"completed":287,"skipped":5111,"failed":19,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-apps] CronJob should not schedule jobs when suspended [Slow] [Conformance]","[sig-network] Services should have session affinity timeout 
work for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 02:05:35.621: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready Mar 22 02:05:36.508: INFO: new replicaset for deployment "sample-crd-conversion-webhook-deployment" is yet to be created Mar 22 02:05:38.541: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751975536, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751975536, loc:(*time.Location)(0x99208a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751975536, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751975536, loc:(*time.Location)(0x99208a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-b7c59d94\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 22 02:05:40.545: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751975536, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751975536, loc:(*time.Location)(0x99208a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751975536, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751975536, loc:(*time.Location)(0x99208a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-b7c59d94\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 22 02:05:43.592: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be 
able to convert a non homogeneous list of CRs [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Mar 22 02:05:43.597: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: Create a v2 custom resource STEP: List CRs in v1 STEP: List CRs in v2 [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 02:05:44.850: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-7742" for this suite. [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137 • [SLOW TEST:9.349 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert a non homogeneous list of CRs [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":330,"completed":288,"skipped":5124,"failed":19,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-apps] CronJob should not schedule jobs when suspended [Slow] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort 
service [LinuxOnly] [Conformance]"]} S ------------------------------ [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 02:05:44.971: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241 [It] should add annotations for pods in rc [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating Agnhost RC Mar 22 02:05:45.068: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=kubectl-2746 create -f -' Mar 22 02:05:45.511: INFO: stderr: "" Mar 22 02:05:45.511: INFO: stdout: "replicationcontroller/agnhost-primary created\n" STEP: Waiting for Agnhost primary to start. Mar 22 02:05:46.514: INFO: Selector matched 1 pods for map[app:agnhost] Mar 22 02:05:46.514: INFO: Found 0 / 1 Mar 22 02:05:47.592: INFO: Selector matched 1 pods for map[app:agnhost] Mar 22 02:05:47.592: INFO: Found 0 / 1 Mar 22 02:05:48.526: INFO: Selector matched 1 pods for map[app:agnhost] Mar 22 02:05:48.526: INFO: Found 0 / 1 Mar 22 02:05:49.517: INFO: Selector matched 1 pods for map[app:agnhost] Mar 22 02:05:49.517: INFO: Found 1 / 1 Mar 22 02:05:49.517: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 STEP: patching all pods Mar 22 02:05:49.521: INFO: Selector matched 1 pods for map[app:agnhost] Mar 22 02:05:49.521: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Mar 22 02:05:49.521: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=kubectl-2746 patch pod agnhost-primary-x4zrw -p {"metadata":{"annotations":{"x":"y"}}}' Mar 22 02:05:49.634: INFO: stderr: "" Mar 22 02:05:49.634: INFO: stdout: "pod/agnhost-primary-x4zrw patched\n" STEP: checking annotations Mar 22 02:05:49.640: INFO: Selector matched 1 pods for map[app:agnhost] Mar 22 02:05:49.640: INFO: ForEach: Found 1 pods from the filter. Now looping through them. [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 02:05:49.640: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2746" for this suite. 
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance]","total":330,"completed":289,"skipped":5125,"failed":19,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-apps] CronJob should not schedule jobs when suspended [Slow] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]} ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 02:05:49.676: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name projected-configmap-test-volume-d6e27f80-89af-49c6-a423-310c1024b555 STEP: Creating a pod to test consume configMaps Mar 22 02:05:49.759: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-3b23fff0-49ad-464c-a120-d7304f0d007c" in namespace "projected-3335" to be "Succeeded or Failed" Mar 22 02:05:49.801: INFO: Pod "pod-projected-configmaps-3b23fff0-49ad-464c-a120-d7304f0d007c": Phase="Pending", Reason="", readiness=false. 
Elapsed: 42.620455ms Mar 22 02:05:51.807: INFO: Pod "pod-projected-configmaps-3b23fff0-49ad-464c-a120-d7304f0d007c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.047838964s Mar 22 02:05:53.812: INFO: Pod "pod-projected-configmaps-3b23fff0-49ad-464c-a120-d7304f0d007c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.052862212s STEP: Saw pod success Mar 22 02:05:53.812: INFO: Pod "pod-projected-configmaps-3b23fff0-49ad-464c-a120-d7304f0d007c" satisfied condition "Succeeded or Failed" Mar 22 02:05:53.815: INFO: Trying to get logs from node latest-worker pod pod-projected-configmaps-3b23fff0-49ad-464c-a120-d7304f0d007c container agnhost-container: STEP: delete the pod Mar 22 02:05:53.886: INFO: Waiting for pod pod-projected-configmaps-3b23fff0-49ad-464c-a120-d7304f0d007c to disappear Mar 22 02:05:53.975: INFO: Pod pod-projected-configmaps-3b23fff0-49ad-464c-a120-d7304f0d007c no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 02:05:53.975: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3335" for this suite. •{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":330,"completed":290,"skipped":5125,"failed":19,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-apps] CronJob should not schedule jobs when suspended [Slow] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]} SS ------------------------------ [sig-node] Security Context should support pod.Spec.SecurityContext.RunAsUser And 
pod.Spec.SecurityContext.RunAsGroup [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 02:05:53.992: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context STEP: Waiting for a default service account to be provisioned in namespace [It] should support pod.Spec.SecurityContext.RunAsUser And pod.Spec.SecurityContext.RunAsGroup [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test pod.Spec.SecurityContext.RunAsUser Mar 22 02:05:54.069: INFO: Waiting up to 5m0s for pod "security-context-1128e1fd-5b95-4f8e-8cf9-532030f80b10" in namespace "security-context-8562" to be "Succeeded or Failed" Mar 22 02:05:54.118: INFO: Pod "security-context-1128e1fd-5b95-4f8e-8cf9-532030f80b10": Phase="Pending", Reason="", readiness=false. Elapsed: 49.327561ms Mar 22 02:05:56.274: INFO: Pod "security-context-1128e1fd-5b95-4f8e-8cf9-532030f80b10": Phase="Pending", Reason="", readiness=false. Elapsed: 2.205087026s Mar 22 02:05:58.279: INFO: Pod "security-context-1128e1fd-5b95-4f8e-8cf9-532030f80b10": Phase="Pending", Reason="", readiness=false. Elapsed: 4.209515559s Mar 22 02:06:00.284: INFO: Pod "security-context-1128e1fd-5b95-4f8e-8cf9-532030f80b10": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.214503758s STEP: Saw pod success Mar 22 02:06:00.284: INFO: Pod "security-context-1128e1fd-5b95-4f8e-8cf9-532030f80b10" satisfied condition "Succeeded or Failed" Mar 22 02:06:00.287: INFO: Trying to get logs from node latest-worker pod security-context-1128e1fd-5b95-4f8e-8cf9-532030f80b10 container test-container: STEP: delete the pod Mar 22 02:06:00.514: INFO: Waiting for pod security-context-1128e1fd-5b95-4f8e-8cf9-532030f80b10 to disappear Mar 22 02:06:00.533: INFO: Pod security-context-1128e1fd-5b95-4f8e-8cf9-532030f80b10 no longer exists [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 02:06:00.533: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-8562" for this suite. 
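------------------------------
For reference, the pod this test builds reduces to the client-go sketch below: it sets pod.Spec.SecurityContext.RunAsUser and pod.Spec.SecurityContext.RunAsGroup and lets the container print its IDs. This is a minimal illustration, not the e2e framework's own helper code; the pod name, the "default" namespace, the busybox image, and the 1000/3000 IDs are placeholders chosen for the example.

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a clientset from the same kubeconfig the suite points at.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	uid, gid := int64(1000), int64(3000) // placeholder IDs
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "security-context-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			SecurityContext: &corev1.PodSecurityContext{
				RunAsUser:  &uid, // field under test
				RunAsGroup: &gid, // Linux-only semantics, hence the [LinuxOnly] tag
			},
			Containers: []corev1.Container{{
				Name:    "test-container",
				Image:   "busybox",
				Command: []string{"sh", "-c", "id"}, // prints uid=1000 gid=3000
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
	// The test then waits for phase "Succeeded" and asserts on the container
	// log, mirroring the "Succeeded or Failed" wait visible above.
}
------------------------------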
• [SLOW TEST:6.613 seconds] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 should support pod.Spec.SecurityContext.RunAsUser And pod.Spec.SecurityContext.RunAsGroup [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Security Context should support pod.Spec.SecurityContext.RunAsUser And pod.Spec.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]","total":330,"completed":291,"skipped":5127,"failed":19,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-apps] CronJob should not schedule jobs when suspended [Slow] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]} S ------------------------------ [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 02:06:00.605: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap 
with name projected-configmap-test-volume-87389ee3-31b8-459d-8d44-d2e8b50cf9f0 STEP: Creating a pod to test consume configMaps Mar 22 02:06:00.811: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-53916452-e322-4cac-8277-af6a0defd5f6" in namespace "projected-1728" to be "Succeeded or Failed" Mar 22 02:06:00.814: INFO: Pod "pod-projected-configmaps-53916452-e322-4cac-8277-af6a0defd5f6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.948841ms Mar 22 02:06:02.819: INFO: Pod "pod-projected-configmaps-53916452-e322-4cac-8277-af6a0defd5f6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007450032s Mar 22 02:06:04.999: INFO: Pod "pod-projected-configmaps-53916452-e322-4cac-8277-af6a0defd5f6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.188024187s STEP: Saw pod success Mar 22 02:06:04.999: INFO: Pod "pod-projected-configmaps-53916452-e322-4cac-8277-af6a0defd5f6" satisfied condition "Succeeded or Failed" Mar 22 02:06:05.003: INFO: Trying to get logs from node latest-worker pod pod-projected-configmaps-53916452-e322-4cac-8277-af6a0defd5f6 container projected-configmap-volume-test: STEP: delete the pod Mar 22 02:06:05.056: INFO: Waiting for pod pod-projected-configmaps-53916452-e322-4cac-8277-af6a0defd5f6 to disappear Mar 22 02:06:05.072: INFO: Pod pod-projected-configmaps-53916452-e322-4cac-8277-af6a0defd5f6 no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 02:06:05.072: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1728" for this suite. •{"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":330,"completed":292,"skipped":5128,"failed":19,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-apps] CronJob should support CronJob 
API operations [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-apps] CronJob should not schedule jobs when suspended [Slow] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]} SSSS ------------------------------ [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 02:06:05.081: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 [It] should provide podname only [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin Mar 22 02:06:05.182: INFO: Waiting up to 5m0s for pod "downwardapi-volume-cba56059-3c17-40e7-b5d1-bcadf5391a81" in namespace "downward-api-4171" to be "Succeeded or Failed" Mar 22 02:06:05.216: INFO: Pod "downwardapi-volume-cba56059-3c17-40e7-b5d1-bcadf5391a81": Phase="Pending", Reason="", readiness=false. Elapsed: 33.382783ms Mar 22 02:06:07.299: INFO: Pod "downwardapi-volume-cba56059-3c17-40e7-b5d1-bcadf5391a81": Phase="Pending", Reason="", readiness=false. Elapsed: 2.116687716s Mar 22 02:06:09.304: INFO: Pod "downwardapi-volume-cba56059-3c17-40e7-b5d1-bcadf5391a81": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.121173556s STEP: Saw pod success Mar 22 02:06:09.304: INFO: Pod "downwardapi-volume-cba56059-3c17-40e7-b5d1-bcadf5391a81" satisfied condition "Succeeded or Failed" Mar 22 02:06:09.307: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-cba56059-3c17-40e7-b5d1-bcadf5391a81 container client-container: STEP: delete the pod Mar 22 02:06:09.329: INFO: Waiting for pod downwardapi-volume-cba56059-3c17-40e7-b5d1-bcadf5391a81 to disappear Mar 22 02:06:09.348: INFO: Pod downwardapi-volume-cba56059-3c17-40e7-b5d1-bcadf5391a81 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 02:06:09.348: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4171" for this suite. 
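------------------------------
The downward API volume exercised above can be reproduced with a sketch like the following, which mounts metadata.name as a file named "podname" and has the container read it back. A minimal sketch only: the pod name, namespace, busybox image, and mount path are placeholders, and the real test uses the framework's own helpers and image.

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						// Expose only the pod name, as the "podname only" test does.
						Items: []corev1.DownwardAPIVolumeFile{{
							Path:     "podname",
							FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "client-container",
				Image:   "busybox",
				Command: []string{"sh", "-c", "cat /etc/podinfo/podname"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "podinfo",
					MountPath: "/etc/podinfo",
				}},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
------------------------------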
•{"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":330,"completed":293,"skipped":5132,"failed":19,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-apps] CronJob should not schedule jobs when suspended [Slow] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]} S ------------------------------ [sig-node] ConfigMap should run through a ConfigMap lifecycle [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 02:06:09.382: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should run through a ConfigMap lifecycle [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a ConfigMap STEP: fetching the ConfigMap STEP: patching the ConfigMap STEP: listing all ConfigMaps in all namespaces with a label selector STEP: deleting the ConfigMap by collection with a label selector STEP: listing all ConfigMaps in test namespace [AfterEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 02:06:09.594: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-8955" for this suite. 
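------------------------------
The lifecycle steps logged above (create, fetch, patch, list across all namespaces by label, delete by collection, list again) map one-to-one onto the client-go calls below. The names, the label key/value, and the namespace are placeholders for illustration.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx, ns := context.TODO(), "default"
	cmClient := cs.CoreV1().ConfigMaps(ns)

	// creating a ConfigMap
	cm := &corev1.ConfigMap{
		ObjectMeta: metav1.ObjectMeta{
			Name:   "lifecycle-demo",
			Labels: map[string]string{"test-configmap": "lifecycle"},
		},
		Data: map[string]string{"key": "value"},
	}
	if _, err := cmClient.Create(ctx, cm, metav1.CreateOptions{}); err != nil {
		panic(err)
	}

	// fetching, then patching the ConfigMap
	if _, err := cmClient.Get(ctx, "lifecycle-demo", metav1.GetOptions{}); err != nil {
		panic(err)
	}
	patch := []byte(`{"data":{"key":"patched"}}`)
	if _, err := cmClient.Patch(ctx, "lifecycle-demo", types.StrategicMergePatchType, patch, metav1.PatchOptions{}); err != nil {
		panic(err)
	}

	// listing all ConfigMaps in all namespaces with a label selector
	sel := metav1.ListOptions{LabelSelector: "test-configmap=lifecycle"}
	list, err := cs.CoreV1().ConfigMaps(metav1.NamespaceAll).List(ctx, sel)
	if err != nil {
		panic(err)
	}
	fmt.Printf("found %d matching ConfigMaps\n", len(list.Items))

	// deleting the ConfigMap by collection with the same selector
	if err := cmClient.DeleteCollection(ctx, metav1.DeleteOptions{}, sel); err != nil {
		panic(err)
	}
}
------------------------------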
•{"msg":"PASSED [sig-node] ConfigMap should run through a ConfigMap lifecycle [Conformance]","total":330,"completed":294,"skipped":5133,"failed":19,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-apps] CronJob should not schedule jobs when suspended [Slow] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 02:06:09.604: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] listing custom resource definition objects works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Mar 22 02:06:09.672: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 02:06:15.981: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying 
namespace "custom-resource-definition-3952" for this suite. • [SLOW TEST:6.386 seconds] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Simple CustomResourceDefinition /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:48 listing custom resource definition objects works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance]","total":330,"completed":295,"skipped":5194,"failed":19,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-apps] CronJob should not schedule jobs when suspended [Slow] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]} SSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 02:06:15.991: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be 
provisioned in namespace [It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test emptydir 0644 on node default medium Mar 22 02:06:16.108: INFO: Waiting up to 5m0s for pod "pod-b4f80653-473b-4bac-88e6-9a2aa3d3cfce" in namespace "emptydir-483" to be "Succeeded or Failed" Mar 22 02:06:16.131: INFO: Pod "pod-b4f80653-473b-4bac-88e6-9a2aa3d3cfce": Phase="Pending", Reason="", readiness=false. Elapsed: 22.592147ms Mar 22 02:06:18.135: INFO: Pod "pod-b4f80653-473b-4bac-88e6-9a2aa3d3cfce": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026729013s Mar 22 02:06:20.140: INFO: Pod "pod-b4f80653-473b-4bac-88e6-9a2aa3d3cfce": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.031444731s STEP: Saw pod success Mar 22 02:06:20.140: INFO: Pod "pod-b4f80653-473b-4bac-88e6-9a2aa3d3cfce" satisfied condition "Succeeded or Failed" Mar 22 02:06:20.143: INFO: Trying to get logs from node latest-worker pod pod-b4f80653-473b-4bac-88e6-9a2aa3d3cfce container test-container: STEP: delete the pod Mar 22 02:06:20.177: INFO: Waiting for pod pod-b4f80653-473b-4bac-88e6-9a2aa3d3cfce to disappear Mar 22 02:06:20.184: INFO: Pod pod-b4f80653-473b-4bac-88e6-9a2aa3d3cfce no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 02:06:20.184: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-483" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":330,"completed":296,"skipped":5200,"failed":19,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-apps] CronJob should support CronJob API operations 
[Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-apps] CronJob should not schedule jobs when suspended [Slow] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 02:06:20.197: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan pods created by rc if delete options say so [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods STEP: Gathering metrics W0322 02:07:00.852624 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Mar 22 02:08:02.873: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering. Mar 22 02:08:02.873: INFO: Deleting pod "simpletest.rc-4z7g2" in namespace "gc-7964" Mar 22 02:08:03.197: INFO: Deleting pod "simpletest.rc-6z6v5" in namespace "gc-7964" Mar 22 02:08:03.358: INFO: Deleting pod "simpletest.rc-8zghs" in namespace "gc-7964" Mar 22 02:08:03.687: INFO: Deleting pod "simpletest.rc-dn99j" in namespace "gc-7964" Mar 22 02:08:04.114: INFO: Deleting pod "simpletest.rc-fwms2" in namespace "gc-7964" Mar 22 02:08:04.214: INFO: Deleting pod "simpletest.rc-kgk2x" in namespace "gc-7964" Mar 22 02:08:04.719: INFO: Deleting pod "simpletest.rc-n9wjh" in namespace "gc-7964" Mar 22 02:08:04.793: INFO: Deleting pod "simpletest.rc-ptkt5" in namespace "gc-7964" Mar 22 02:08:05.133: INFO: Deleting pod "simpletest.rc-vxg4s" in namespace "gc-7964" Mar 22 02:08:05.372: INFO: Deleting pod "simpletest.rc-xckhp" in namespace "gc-7964" [AfterEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 02:08:05.824: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-7964" for this suite. 
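------------------------------
The API detail that drives this test is the delete option: passing PropagationPolicy: Orphan makes the garbage collector strip owner references instead of cascading the delete, which is why the simpletest.rc-* pods survive and are then removed one by one during cleanup above. A sketch, with the namespace and resource names as placeholders:

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx, ns := context.TODO(), "gc-demo" // placeholder namespace

	// Delete the ReplicationController but orphan its pods: the garbage
	// collector removes the owner reference rather than deleting the pods.
	orphan := metav1.DeletePropagationOrphan
	if err := cs.CoreV1().ReplicationControllers(ns).Delete(ctx, "simpletest-rc",
		metav1.DeleteOptions{PropagationPolicy: &orphan}); err != nil {
		panic(err)
	}

	// The orphaned pods remain, so cleanup has to delete them individually,
	// matching the per-pod deletions in the log above.
	pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		_ = cs.CoreV1().Pods(ns).Delete(ctx, p.Name, metav1.DeleteOptions{})
	}
}
------------------------------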
• [SLOW TEST:105.704 seconds] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan pods created by rc if delete options say so [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":330,"completed":297,"skipped":5221,"failed":19,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-apps] CronJob should not schedule jobs when suspended [Slow] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]} SSS ------------------------------ [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 02:08:05.901: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to start watching from a specific resource version [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a new configmap STEP: modifying the configmap once STEP: modifying the configmap a second time 
STEP: deleting the configmap STEP: creating a watch on configmaps from the resource version returned by the first update STEP: Expecting to observe notifications for all changes to the configmap after the first update Mar 22 02:08:06.790: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-3104 031892fb-ed06-49cf-9aed-f80ae4b76515 7021378 0 2021-03-22 02:08:06 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] [{e2e.test Update v1 2021-03-22 02:08:06 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Mar 22 02:08:06.790: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-3104 031892fb-ed06-49cf-9aed-f80ae4b76515 7021379 0 2021-03-22 02:08:06 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] [{e2e.test Update v1 2021-03-22 02:08:06 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 02:08:06.790: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-3104" for this suite. •{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":330,"completed":298,"skipped":5224,"failed":19,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-apps] CronJob should not schedule jobs when 
suspended [Slow] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSS ------------------------------ [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-api-machinery] Aggregator /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 02:08:06.811: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename aggregator STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Aggregator /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:77 Mar 22 02:08:06.992: INFO: >>> kubeConfig: /root/.kube/config [It] Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Registering the sample API server. Mar 22 02:08:07.798: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set Mar 22 02:08:09.947: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751975688, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751975688, loc:(*time.Location)(0x99208a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751975688, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751975687, loc:(*time.Location)(0x99208a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-64f6b9dc99\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 22 02:08:12.055: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751975688, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751975688, loc:(*time.Location)(0x99208a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63751975688, loc:(*time.Location)(0x99208a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63751975687, loc:(*time.Location)(0x99208a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-64f6b9dc99\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 22 02:08:14.807: INFO: Waited 848.310693ms for the sample-apiserver to be ready to handle requests. 
STEP: Read Status for v1alpha1.wardle.example.com STEP: kubectl patch apiservice v1alpha1.wardle.example.com -p '{"spec":{"versionPriority": 400}}' STEP: List APIServices Mar 22 02:08:14.954: INFO: Found v1alpha1.wardle.example.com in APIServiceList [AfterEach] [sig-api-machinery] Aggregator /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:68 [AfterEach] [sig-api-machinery] Aggregator /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 02:08:15.738: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "aggregator-6791" for this suite. • [SLOW TEST:9.331 seconds] [sig-api-machinery] Aggregator /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","total":330,"completed":299,"skipped":5232,"failed":19,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-apps] CronJob should not schedule jobs when suspended [Slow] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]} SS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 02:08:16.142: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group but different versions [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: CRs in the same group but different versions (one multiversion CRD) show up in OpenAPI documentation Mar 22 02:08:16.266: INFO: >>> kubeConfig: /root/.kube/config STEP: CRs in the same group but different versions (two CRDs) show up in OpenAPI documentation Mar 22 02:08:29.721: INFO: >>> kubeConfig: /root/.kube/config Mar 22 02:08:33.300: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 02:08:47.142: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-5063" for this suite. • [SLOW TEST:31.006 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group but different versions [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":330,"completed":300,"skipped":5234,"failed":19,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have 
session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-apps] CronJob should not schedule jobs when suspended [Slow] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] PreStop should call prestop when killing a pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-node] PreStop /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 02:08:47.148: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] PreStop /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:157 [It] should call prestop when killing a pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating server pod server in namespace prestop-5920 STEP: Waiting for pods to come up. STEP: Creating tester pod tester in namespace prestop-5920 STEP: Deleting pre-stop pod Mar 22 02:09:00.388: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": { "prestop": 1 }, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." ], "StillContactingPeers": true } STEP: Deleting the server pod [AfterEach] [sig-node] PreStop /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 02:09:00.445: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "prestop-5920" for this suite. 
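------------------------------
The "prestop": 1 counter seen in the JSON above comes from a container lifecycle hook. A minimal sketch follows, assuming client-go v0.23 or newer (earlier releases name the hook type corev1.Handler rather than corev1.LifecycleHandler); the image and the target URL are placeholders, whereas the real test POSTs to its server pod's IP and then deletes the tester pod.

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.TODO()

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "prestop-demo"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "tester",
				Image:   "busybox",
				Command: []string{"sleep", "3600"},
				Lifecycle: &corev1.Lifecycle{
					// corev1.Handler in client-go releases before v0.23.
					PreStop: &corev1.LifecycleHandler{
						Exec: &corev1.ExecAction{
							// Placeholder endpoint standing in for the server pod's IP.
							Command: []string{"wget", "-qO-", "http://server:8080/prestop"},
						},
					},
				},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(ctx, pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
	// In practice, wait for the pod to be Running first; the hook only fires
	// for started containers. Deleting the pod then runs preStop before the
	// container is killed, which is what increments the server's counter.
	if err := cs.CoreV1().Pods("default").Delete(ctx, "prestop-demo", metav1.DeleteOptions{}); err != nil {
		panic(err)
	}
}
------------------------------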
• [SLOW TEST:13.310 seconds] [sig-node] PreStop /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 should call prestop when killing a pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] PreStop should call prestop when killing a pod [Conformance]","total":330,"completed":301,"skipped":5267,"failed":19,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-apps] CronJob should not schedule jobs when suspended [Slow] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Variable Expansion should fail substituting values in a volume subpath with absolute path [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 02:09:00.459: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should fail substituting values in a volume subpath with absolute path [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Mar 22 02:09:05.026: INFO: Deleting pod "var-expansion-e0475a75-8f9e-482b-a1e1-cb59bb2d6e80" in namespace "var-expansion-6217" Mar 22 
02:09:05.029: INFO: Wait up to 5m0s for pod "var-expansion-e0475a75-8f9e-482b-a1e1-cb59bb2d6e80" to be fully deleted [AfterEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 02:09:47.049: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-6217" for this suite. • [SLOW TEST:46.601 seconds] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should fail substituting values in a volume subpath with absolute path [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Variable Expansion should fail substituting values in a volume subpath with absolute path [Slow] [Conformance]","total":330,"completed":302,"skipped":5293,"failed":19,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-apps] CronJob should not schedule jobs when suspended [Slow] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Job should delete a job [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 02:09:47.060: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: 
Waiting for a default service account to be provisioned in namespace [It] should delete a job [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: delete a job STEP: deleting Job.batch foo in namespace job-6946, will wait for the garbage collector to delete the pods Mar 22 02:09:53.274: INFO: Deleting Job.batch foo took: 16.037538ms Mar 22 02:09:53.875: INFO: Terminating Job.batch foo pods took: 600.918219ms STEP: Ensuring job was deleted [AfterEach] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 02:10:45.480: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-6946" for this suite. • [SLOW TEST:58.477 seconds] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should delete a job [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":330,"completed":303,"skipped":5316,"failed":19,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-apps] CronJob should not schedule jobs when suspended [Slow] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]} SS ------------------------------ [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. 
------------------------------
[sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 22 02:10:45.537: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to update and delete ResourceQuota. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a ResourceQuota
STEP: Getting a ResourceQuota
STEP: Updating a ResourceQuota
STEP: Verifying a ResourceQuota was modified
STEP: Deleting a ResourceQuota
STEP: Verifying the deleted ResourceQuota
[AfterEach] [sig-api-machinery] ResourceQuota
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 22 02:10:45.674: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-3567" for this suite.
•
{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance]","total":330,"completed":304,"skipped":5318,"failed":19,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-apps] CronJob should not schedule jobs when suspended [Slow] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSS
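Editor's note: the create/get/update/delete sequence this spec walks through maps directly onto the core/v1 ResourceQuota client. A rough sketch under assumed values (the quota name, namespace, and hard limit below are placeholders; the real spec uses generated names inside its "resourcequota-3567"-style namespace):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func must(err error) {
	if err != nil {
		panic(err)
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	must(err)
	cs, err := kubernetes.NewForConfig(cfg)
	must(err)
	ctx, ns := context.TODO(), "default"

	// Create a quota capping the number of pods in the namespace.
	rq := &corev1.ResourceQuota{
		ObjectMeta: metav1.ObjectMeta{Name: "test-quota"},
		Spec: corev1.ResourceQuotaSpec{
			Hard: corev1.ResourceList{corev1.ResourcePods: resource.MustParse("5")},
		},
	}
	created, err := cs.CoreV1().ResourceQuotas(ns).Create(ctx, rq, metav1.CreateOptions{})
	must(err)

	// Update the hard limit and verify the modification round-trips.
	created.Spec.Hard[corev1.ResourcePods] = resource.MustParse("10")
	updated, err := cs.CoreV1().ResourceQuotas(ns).Update(ctx, created, metav1.UpdateOptions{})
	must(err)
	limit := updated.Spec.Hard[corev1.ResourcePods]
	fmt.Println("pods hard limit now:", limit.String())

	// Delete the quota; a follow-up Get would return a NotFound error.
	must(cs.CoreV1().ResourceQuotas(ns).Delete(ctx, "test-quota", metav1.DeleteOptions{}))
}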
------------------------------
[sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 22 02:10:45.682: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating projection with secret that has name projected-secret-test-00227b27-950e-440f-aaa2-9b67b98232f3
STEP: Creating a pod to test consume secrets
Mar 22 02:10:45.760: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-68b13b1c-63dc-43e9-bc68-84c113c4859d" in namespace "projected-493" to be "Succeeded or Failed"
Mar 22 02:10:45.781: INFO: Pod "pod-projected-secrets-68b13b1c-63dc-43e9-bc68-84c113c4859d": Phase="Pending", Reason="", readiness=false. Elapsed: 21.128757ms
Mar 22 02:10:48.034: INFO: Pod "pod-projected-secrets-68b13b1c-63dc-43e9-bc68-84c113c4859d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.274060428s
Mar 22 02:10:50.040: INFO: Pod "pod-projected-secrets-68b13b1c-63dc-43e9-bc68-84c113c4859d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.280104405s
STEP: Saw pod success
Mar 22 02:10:50.040: INFO: Pod "pod-projected-secrets-68b13b1c-63dc-43e9-bc68-84c113c4859d" satisfied condition "Succeeded or Failed"
Mar 22 02:10:50.045: INFO: Trying to get logs from node latest-worker pod pod-projected-secrets-68b13b1c-63dc-43e9-bc68-84c113c4859d container projected-secret-volume-test:
STEP: delete the pod
Mar 22 02:10:50.105: INFO: Waiting for pod pod-projected-secrets-68b13b1c-63dc-43e9-bc68-84c113c4859d to disappear
Mar 22 02:10:50.118: INFO: Pod pod-projected-secrets-68b13b1c-63dc-43e9-bc68-84c113c4859d no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 22 02:10:50.118: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-493" for this suite.
•
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":330,"completed":305,"skipped":5340,"failed":19,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-apps] CronJob should not schedule jobs when suspended [Slow] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]}
SSSSSSSSSSSSSSS
------------------------------
[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-auth] Certificates API [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 22 02:10:50.126: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename certificates
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support CSR API operations [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: getting /apis
STEP: getting /apis/certificates.k8s.io
STEP: getting /apis/certificates.k8s.io/v1
STEP: creating
STEP: getting
STEP: listing
STEP: watching
Mar 22 02:10:50.918: INFO: starting watch
STEP: patching
STEP: updating
Mar 22 02:10:50.930: INFO: waiting for watch events with expected annotations
Mar 22 02:10:50.930: INFO: saw patched and updated annotations
STEP: getting /approval
STEP: patching /approval
STEP: updating /approval
STEP: getting /status
STEP: patching /status
STEP: updating /status
STEP: deleting
STEP: deleting a collection
[AfterEach] [sig-auth] Certificates API [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 22 02:10:51.236: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "certificates-2753" for this suite.
•
{"msg":"PASSED [sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","total":330,"completed":306,"skipped":5355,"failed":19,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-apps] CronJob should not schedule jobs when suspended [Slow] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]}
SSSSSSSSSS
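Editor's note: the "creating" step of this spec needs a valid PEM-encoded certificate request in spec.request; the remaining verbs (get, list, watch, patch, update, the /approval and /status subresources, delete) are plain CRUD against certificates.k8s.io/v1. A sketch of just the create step, with an arbitrary CSR name, subject, and key type (not the suite's actual fixture):

package main

import (
	"context"
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"

	certificatesv1 "k8s.io/api/certificates/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func must(err error) {
	if err != nil {
		panic(err)
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	must(err)
	cs, err := kubernetes.NewForConfig(cfg)
	must(err)

	// Generate a throwaway key and a PEM-encoded certificate request.
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	must(err)
	der, err := x509.CreateCertificateRequest(rand.Reader,
		&x509.CertificateRequest{Subject: pkix.Name{CommonName: "csr-demo"}}, key)
	must(err)

	// CSRs are cluster-scoped; signerName and usages are required in v1.
	csr := &certificatesv1.CertificateSigningRequest{
		ObjectMeta: metav1.ObjectMeta{Name: "csr-demo"},
		Spec: certificatesv1.CertificateSigningRequestSpec{
			Request:    pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE REQUEST", Bytes: der}),
			SignerName: "kubernetes.io/kube-apiserver-client",
			Usages:     []certificatesv1.KeyUsage{certificatesv1.UsageClientAuth},
		},
	}
	created, err := cs.CertificatesV1().CertificateSigningRequests().Create(
		context.TODO(), csr, metav1.CreateOptions{})
	must(err)
	fmt.Println("created CSR:", created.Name)
}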
------------------------------
[sig-node] Pods Extended Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] Pods Extended
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 22 02:10:51.248: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Pods Set QOS Class
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:150
[It] should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying QOS class is set on the pod
[AfterEach] [sig-node] Pods Extended
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 22 02:10:51.493: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-4830" for this suite.
•
{"msg":"PASSED [sig-node] Pods Extended Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":330,"completed":307,"skipped":5365,"failed":19,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-apps] CronJob should not schedule jobs when suspended [Slow] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]}
SSSSSSSSSSSSS
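Editor's note: "verifying QOS class is set on the pod" works because the API server computes status.qosClass at admission time; when every container's requests equal its limits for both cpu and memory, the class is Guaranteed. A sketch under assumed names (pod name, image, and namespace are placeholders):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func must(err error) {
	if err != nil {
		panic(err)
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	must(err)
	cs, err := kubernetes.NewForConfig(cfg)
	must(err)
	ctx, ns := context.TODO(), "default"

	// Identical requests and limits on the single container => Guaranteed.
	res := corev1.ResourceList{
		corev1.ResourceCPU:    resource.MustParse("100m"),
		corev1.ResourceMemory: resource.MustParse("100Mi"),
	}
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "qos-demo"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:      "pause",
				Image:     "k8s.gcr.io/pause:3.4.1",
				Resources: corev1.ResourceRequirements{Requests: res, Limits: res},
			}},
		},
	}
	created, err := cs.CoreV1().Pods(ns).Create(ctx, pod, metav1.CreateOptions{})
	must(err)

	// No need to wait for scheduling; the QOS class is already in status.
	got, err := cs.CoreV1().Pods(ns).Get(ctx, created.Name, metav1.GetOptions{})
	must(err)
	fmt.Println("QOS class:", got.Status.QOSClass) // expected: Guaranteed
}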
------------------------------
[sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 22 02:10:51.521: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating projection with secret that has name projected-secret-test-96f145d0-56b3-4e3c-94b0-871426e2a6d6
STEP: Creating a pod to test consume secrets
Mar 22 02:10:51.664: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-99a46d9f-47fa-49de-9620-b64627b60992" in namespace "projected-6785" to be "Succeeded or Failed"
Mar 22 02:10:51.667: INFO: Pod "pod-projected-secrets-99a46d9f-47fa-49de-9620-b64627b60992": Phase="Pending", Reason="", readiness=false. Elapsed: 3.738011ms
Mar 22 02:10:53.673: INFO: Pod "pod-projected-secrets-99a46d9f-47fa-49de-9620-b64627b60992": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009571501s
Mar 22 02:10:55.691: INFO: Pod "pod-projected-secrets-99a46d9f-47fa-49de-9620-b64627b60992": Phase="Running", Reason="", readiness=true. Elapsed: 4.027781686s
Mar 22 02:10:57.697: INFO: Pod "pod-projected-secrets-99a46d9f-47fa-49de-9620-b64627b60992": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.033510664s
STEP: Saw pod success
Mar 22 02:10:57.697: INFO: Pod "pod-projected-secrets-99a46d9f-47fa-49de-9620-b64627b60992" satisfied condition "Succeeded or Failed"
Mar 22 02:10:57.700: INFO: Trying to get logs from node latest-worker pod pod-projected-secrets-99a46d9f-47fa-49de-9620-b64627b60992 container projected-secret-volume-test:
STEP: delete the pod
Mar 22 02:10:57.740: INFO: Waiting for pod pod-projected-secrets-99a46d9f-47fa-49de-9620-b64627b60992 to disappear
Mar 22 02:10:57.769: INFO: Pod pod-projected-secrets-99a46d9f-47fa-49de-9620-b64627b60992 no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 22 02:10:57.770: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6785" for this suite.

• [SLOW TEST:6.257 seconds]
[sig-storage] Projected secret
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":330,"completed":308,"skipped":5378,"failed":19,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-apps] CronJob should not schedule jobs when suspended [Slow] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]}
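Editor's note: the non-root variant differs from the earlier projected-secret spec only in the pod security context and file mode: runAsUser makes the container non-root, fsGroup sets the group ownership of the projected files, and defaultMode sets their permission bits. A sketch of such a pod (all values illustrative; the secret name reuses the hypothetical "projected-secret-test" from the earlier sketch):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx, ns := context.TODO(), "default"

	mode := int32(0440) // defaultMode applied to the projected files
	uid := int64(1000)  // non-root user for the whole pod
	gid := int64(1001)  // fsGroup: group ownership of the volume contents

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "projected-nonroot-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy:   corev1.RestartPolicyNever,
			SecurityContext: &corev1.PodSecurityContext{RunAsUser: &uid, FSGroup: &gid},
			Volumes: []corev1.Volume{{
				Name: "secret-volume",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						DefaultMode: &mode,
						Sources: []corev1.VolumeProjection{{
							Secret: &corev1.SecretProjection{
								LocalObjectReference: corev1.LocalObjectReference{Name: "projected-secret-test"},
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "mounttest",
				Image:   "busybox:1.29",
				Command: []string{"sh", "-c", "ls -ln /etc/secret-volume && cat /etc/secret-volume/data-1"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "secret-volume",
					MountPath: "/etc/secret-volume",
				}},
			}},
		},
	}
	created, err := cs.CoreV1().Pods(ns).Create(ctx, pod, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("created pod:", created.Name)
}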
[sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 22 02:10:57.779: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test emptydir 0777 on node default medium
Mar 22 02:10:57.919: INFO: Waiting up to 5m0s for pod "pod-028c7fb3-8e5c-41fb-af81-4bdee37ae8ea" in namespace "emptydir-9301" to be "Succeeded or Failed"
Mar 22 02:10:57.936: INFO: Pod "pod-028c7fb3-8e5c-41fb-af81-4bdee37ae8ea": Phase="Pending", Reason="", readiness=false. Elapsed: 17.487818ms
Mar 22 02:10:59.980: INFO: Pod "pod-028c7fb3-8e5c-41fb-af81-4bdee37ae8ea": Phase="Pending", Reason="", readiness=false. Elapsed: 2.060853001s
Mar 22 02:11:01.985: INFO: Pod "pod-028c7fb3-8e5c-41fb-af81-4bdee37ae8ea": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.066586033s
STEP: Saw pod success
Mar 22 02:11:01.985: INFO: Pod "pod-028c7fb3-8e5c-41fb-af81-4bdee37ae8ea" satisfied condition "Succeeded or Failed"
Mar 22 02:11:01.989: INFO: Trying to get logs from node latest-worker pod pod-028c7fb3-8e5c-41fb-af81-4bdee37ae8ea container test-container:
STEP: delete the pod
Mar 22 02:11:02.028: INFO: Waiting for pod pod-028c7fb3-8e5c-41fb-af81-4bdee37ae8ea to disappear
Mar 22 02:11:02.057: INFO: Pod pod-028c7fb3-8e5c-41fb-af81-4bdee37ae8ea no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 22 02:11:02.057: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9301" for this suite.
•
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":330,"completed":309,"skipped":5378,"failed":19,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-apps] CronJob should not schedule jobs when suspended [Slow] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]}
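Editor's note: the "(root,0777,default)" triple in the spec name encodes the test matrix: run as root, expect 0777 permissions, and use the default emptyDir medium (node disk rather than tmpfs). A pod exercising the same combination might look like the following sketch (the image and shell command stand in for the suite's mounttest container and are assumptions):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx, ns := context.TODO(), "default"

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-mode-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				// An empty EmptyDirVolumeSource selects the default medium
				// (node-local disk); Medium: "Memory" would select tmpfs.
				VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
			}},
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "busybox:1.29",
				// Create a file, set 0777, and print the observed mode bits.
				Command: []string{"sh", "-c",
					"touch /test-volume/f && chmod 0777 /test-volume/f && stat -c '%a' /test-volume/f"},
				VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
			}},
		},
	}
	created, err := cs.CoreV1().Pods(ns).Create(ctx, pod, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("created pod:", created.Name)
}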
service [LinuxOnly] [Conformance]"]} SSSSS ------------------------------ [sig-api-machinery] server version should find the server version [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-api-machinery] server version /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 02:11:02.066: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename server-version STEP: Waiting for a default service account to be provisioned in namespace [It] should find the server version [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Request ServerVersion STEP: Confirm major version Mar 22 02:11:02.127: INFO: Major version: 1 STEP: Confirm minor version Mar 22 02:11:02.127: INFO: cleanMinorVersion: 21 Mar 22 02:11:02.127: INFO: Minor version: 21+ [AfterEach] [sig-api-machinery] server version /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 02:11:02.127: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "server-version-1452" for this suite. •{"msg":"PASSED [sig-api-machinery] server version should find the server version [Conformance]","total":330,"completed":310,"skipped":5383,"failed":19,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-apps] CronJob should not schedule jobs when suspended [Slow] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSS 
------------------------------
[sig-node] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] Kubelet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 22 02:11:02.148: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Kubelet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:38
[It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Mar 22 02:11:02.334: INFO: The status of Pod busybox-readonly-fscd5b7aa3-13f4-4cd7-9fac-8c27c32277b7 is Pending, waiting for it to be Running (with Ready = true)
Mar 22 02:11:04.339: INFO: The status of Pod busybox-readonly-fscd5b7aa3-13f4-4cd7-9fac-8c27c32277b7 is Pending, waiting for it to be Running (with Ready = true)
Mar 22 02:11:06.340: INFO: The status of Pod busybox-readonly-fscd5b7aa3-13f4-4cd7-9fac-8c27c32277b7 is Running (Ready = true)
[AfterEach] [sig-node] Kubelet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 22 02:11:06.349: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-102" for this suite.
•
{"msg":"PASSED [sig-node] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":330,"completed":311,"skipped":5398,"failed":19,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-apps] CronJob should not schedule jobs when suspended [Slow] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]}
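Editor's note: the busybox-readonly-fs pod above sets readOnlyRootFilesystem in the container security context, so writes to the root filesystem fail while mounted volumes stay writable. A sketch of such a pod (image, name, and command are illustrative placeholders):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx, ns := context.TODO(), "default"

	ro := true
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "busybox-readonly-fs-demo"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "busybox-readonly-fs",
				Image: "busybox:1.29",
				// The touch is expected to fail with "Read-only file system";
				// the container then stays up so its logs can be inspected.
				Command:         []string{"sh", "-c", "touch /file; sleep 3600"},
				SecurityContext: &corev1.SecurityContext{ReadOnlyRootFilesystem: &ro},
			}},
		},
	}
	created, err := cs.CoreV1().Pods(ns).Create(ctx, pod, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("created pod:", created.Name)
}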
SSSSSSSSS
Mar 22 02:11:06.359: INFO: Running AfterSuite actions on all nodes
Mar 22 02:11:06.359: INFO: Running AfterSuite actions on node 1
Mar 22 02:11:06.359: INFO: Skipping dumping logs from cluster

JUnit report was created: /home/opnfv/functest/results/k8s_conformance/junit_01.xml

{"msg":"Test Suite completed","total":330,"completed":311,"skipped":5407,"failed":19,"failures":["[sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","[sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","[sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","[sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","[sig-apps] CronJob should support CronJob API operations [Conformance]","[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","[sig-apps] CronJob should not schedule jobs when suspended [Slow] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]}

Summarizing 19 Failures:

[Fail] [sig-apps] CronJob [It] should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:132

[Fail] [sig-node] Probing container [It] should have monotonically increasing restart count [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:607

[Fail] [sig-apps] CronJob [It] should replace jobs when ReplaceConcurrent [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:168
[Fail] [sig-apps] CronJob [It] should schedule multiple jobs concurrently [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:77

[Fail] [sig-network] Services [It] should be able to change the type from ExternalName to ClusterIP [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1312

[Fail] [sig-network] Services [It] should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2563

[Fail] [sig-network] EndpointSlice [It] should have Endpoints and EndpointSlices pointing to API Server [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/endpointslice.go:70

[Fail] [sig-network] EndpointSliceMirroring [It] should mirror a custom Endpoints resource through create update and delete [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:442

[Fail] [sig-network] Services [It] should have session affinity work for NodePort service [LinuxOnly] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2563

[Fail] [sig-network] Services [It] should be able to change the type from ExternalName to NodePort [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1351

[Fail] [sig-network] Services [It] should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2563

[Fail] [sig-network] Services [It] should be able to create a functioning NodePort service [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1169

[Fail] [sig-network] Services [It] should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2563

[Fail] [sig-network] Services [It] should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2484

[Fail] [sig-network] EndpointSlice [It] should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113

[Fail] [sig-apps] CronJob [It] should support CronJob API operations [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:327

[Fail] [sig-network] EndpointSlice [It] should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/endpointslice.go:522

[Fail] [sig-apps] CronJob [It] should not schedule jobs when suspended [Slow] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:106

[Fail] [sig-network] Services [It] should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2484

Ran 330 of 5737 Specs in 10222.671 seconds
FAIL! -- 311 Passed | 19 Failed | 0 Pending | 5407 Skipped
--- FAIL: TestE2E (10222.78s)
FAIL