I0202 22:32:53.238134 7 test_context.go:457] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready I0202 22:32:53.238315 7 e2e.go:129] Starting e2e run "d8acb757-e32d-4ee4-8616-e2f5848ca1dd" on Ginkgo node 1 {"msg":"Test Suite starting","total":309,"completed":0,"skipped":0,"failed":0} Running Suite: Kubernetes e2e suite =================================== Random Seed: 1612305171 - Will randomize all specs Will run 309 of 5667 specs Feb 2 22:32:53.312: INFO: >>> kubeConfig: /root/.kube/config Feb 2 22:32:53.315: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable Feb 2 22:32:53.335: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready Feb 2 22:32:53.370: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed) Feb 2 22:32:53.370: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready. Feb 2 22:32:53.370: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start Feb 2 22:32:53.376: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed) Feb 2 22:32:53.376: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed) Feb 2 22:32:53.376: INFO: e2e test version: v1.20.1 Feb 2 22:32:53.377: INFO: kube-apiserver version: v1.20.0 Feb 2 22:32:53.377: INFO: >>> kubeConfig: /root/.kube/config Feb 2 22:32:53.381: INFO: Cluster IP family: ipv4 SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 2 22:32:53.381: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api Feb 2 22:32:53.711: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled. STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a pod to test downward API volume plugin Feb 2 22:32:53.719: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7d6f8a19-6a03-4430-afb3-25f4e565292d" in namespace "downward-api-4133" to be "Succeeded or Failed" Feb 2 22:32:53.774: INFO: Pod "downwardapi-volume-7d6f8a19-6a03-4430-afb3-25f4e565292d": Phase="Pending", Reason="", readiness=false. Elapsed: 54.577097ms Feb 2 22:32:56.929: INFO: Pod "downwardapi-volume-7d6f8a19-6a03-4430-afb3-25f4e565292d": Phase="Pending", Reason="", readiness=false. Elapsed: 3.209579673s Feb 2 22:32:58.933: INFO: Pod "downwardapi-volume-7d6f8a19-6a03-4430-afb3-25f4e565292d": Phase="Pending", Reason="", readiness=false. Elapsed: 5.214332844s Feb 2 22:33:00.938: INFO: Pod "downwardapi-volume-7d6f8a19-6a03-4430-afb3-25f4e565292d": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 7.21917692s STEP: Saw pod success Feb 2 22:33:00.938: INFO: Pod "downwardapi-volume-7d6f8a19-6a03-4430-afb3-25f4e565292d" satisfied condition "Succeeded or Failed" Feb 2 22:33:00.942: INFO: Trying to get logs from node leguer-worker pod downwardapi-volume-7d6f8a19-6a03-4430-afb3-25f4e565292d container client-container: STEP: delete the pod Feb 2 22:33:00.991: INFO: Waiting for pod downwardapi-volume-7d6f8a19-6a03-4430-afb3-25f4e565292d to disappear Feb 2 22:33:01.008: INFO: Pod downwardapi-volume-7d6f8a19-6a03-4430-afb3-25f4e565292d no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 2 22:33:01.008: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4133" for this suite. • [SLOW TEST:7.637 seconds] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:36 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":309,"completed":1,"skipped":17,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 2 22:33:01.018: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a pod to test emptydir 0666 on tmpfs Feb 2 22:33:01.110: INFO: Waiting up to 5m0s for pod "pod-ab653194-bbdb-45c9-b93b-f74b42e7777d" in namespace "emptydir-1715" to be "Succeeded or Failed" Feb 2 22:33:01.120: INFO: Pod "pod-ab653194-bbdb-45c9-b93b-f74b42e7777d": Phase="Pending", Reason="", readiness=false. Elapsed: 9.334003ms Feb 2 22:33:03.124: INFO: Pod "pod-ab653194-bbdb-45c9-b93b-f74b42e7777d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013313925s Feb 2 22:33:05.129: INFO: Pod "pod-ab653194-bbdb-45c9-b93b-f74b42e7777d": Phase="Running", Reason="", readiness=true. Elapsed: 4.018207111s Feb 2 22:33:07.134: INFO: Pod "pod-ab653194-bbdb-45c9-b93b-f74b42e7777d": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.023542682s STEP: Saw pod success Feb 2 22:33:07.134: INFO: Pod "pod-ab653194-bbdb-45c9-b93b-f74b42e7777d" satisfied condition "Succeeded or Failed" Feb 2 22:33:07.137: INFO: Trying to get logs from node leguer-worker pod pod-ab653194-bbdb-45c9-b93b-f74b42e7777d container test-container: STEP: delete the pod Feb 2 22:33:07.157: INFO: Waiting for pod pod-ab653194-bbdb-45c9-b93b-f74b42e7777d to disappear Feb 2 22:33:07.161: INFO: Pod pod-ab653194-bbdb-45c9-b93b-f74b42e7777d no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 2 22:33:07.161: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1715" for this suite. • [SLOW TEST:6.190 seconds] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:45 should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":309,"completed":2,"skipped":51,"failed":0} SSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 2 22:33:07.208: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a pod to test emptydir volume type on node default medium Feb 2 22:33:07.296: INFO: Waiting up to 5m0s for pod "pod-6d8bb569-2097-411b-b270-0e2b17ba3e3a" in namespace "emptydir-9997" to be "Succeeded or Failed" Feb 2 22:33:07.328: INFO: Pod "pod-6d8bb569-2097-411b-b270-0e2b17ba3e3a": Phase="Pending", Reason="", readiness=false. Elapsed: 31.141988ms Feb 2 22:33:09.332: INFO: Pod "pod-6d8bb569-2097-411b-b270-0e2b17ba3e3a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035420195s Feb 2 22:33:11.337: INFO: Pod "pod-6d8bb569-2097-411b-b270-0e2b17ba3e3a": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.040535961s STEP: Saw pod success Feb 2 22:33:11.337: INFO: Pod "pod-6d8bb569-2097-411b-b270-0e2b17ba3e3a" satisfied condition "Succeeded or Failed" Feb 2 22:33:11.343: INFO: Trying to get logs from node leguer-worker pod pod-6d8bb569-2097-411b-b270-0e2b17ba3e3a container test-container: STEP: delete the pod Feb 2 22:33:11.386: INFO: Waiting for pod pod-6d8bb569-2097-411b-b270-0e2b17ba3e3a to disappear Feb 2 22:33:11.401: INFO: Pod pod-6d8bb569-2097-411b-b270-0e2b17ba3e3a no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 2 22:33:11.401: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9997" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":309,"completed":3,"skipped":56,"failed":0} ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 2 22:33:11.431: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-8210 [It] should perform rolling updates and roll backs of template modifications [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a new StatefulSet Feb 2 22:33:11.590: INFO: Found 0 stateful pods, waiting for 3 Feb 2 22:33:21.594: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Feb 2 22:33:21.594: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Feb 2 22:33:21.594: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false Feb 2 22:33:31.596: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Feb 2 22:33:31.596: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Feb 2 22:33:31.596: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true Feb 2 22:33:31.607: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-8210 exec ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Feb 2 22:33:34.775: INFO: stderr: "I0202 22:33:34.622024 28 log.go:181] (0xc000da0000) (0xc000ae8140) Create stream\nI0202 22:33:34.622139 28 log.go:181] (0xc000da0000) (0xc000ae8140) Stream added, broadcasting: 1\nI0202 22:33:34.626715 28 log.go:181] (0xc000da0000) Reply frame 
received for 1\nI0202 22:33:34.626772 28 log.go:181] (0xc000da0000) (0xc00089d040) Create stream\nI0202 22:33:34.626787 28 log.go:181] (0xc000da0000) (0xc00089d040) Stream added, broadcasting: 3\nI0202 22:33:34.627938 28 log.go:181] (0xc000da0000) Reply frame received for 3\nI0202 22:33:34.627986 28 log.go:181] (0xc000da0000) (0xc00089d2c0) Create stream\nI0202 22:33:34.627998 28 log.go:181] (0xc000da0000) (0xc00089d2c0) Stream added, broadcasting: 5\nI0202 22:33:34.629161 28 log.go:181] (0xc000da0000) Reply frame received for 5\nI0202 22:33:34.724103 28 log.go:181] (0xc000da0000) Data frame received for 5\nI0202 22:33:34.724138 28 log.go:181] (0xc00089d2c0) (5) Data frame handling\nI0202 22:33:34.724164 28 log.go:181] (0xc00089d2c0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0202 22:33:34.763811 28 log.go:181] (0xc000da0000) Data frame received for 3\nI0202 22:33:34.763840 28 log.go:181] (0xc00089d040) (3) Data frame handling\nI0202 22:33:34.763854 28 log.go:181] (0xc00089d040) (3) Data frame sent\nI0202 22:33:34.763862 28 log.go:181] (0xc000da0000) Data frame received for 3\nI0202 22:33:34.763870 28 log.go:181] (0xc00089d040) (3) Data frame handling\nI0202 22:33:34.764303 28 log.go:181] (0xc000da0000) Data frame received for 5\nI0202 22:33:34.764324 28 log.go:181] (0xc00089d2c0) (5) Data frame handling\nI0202 22:33:34.766912 28 log.go:181] (0xc000da0000) Data frame received for 1\nI0202 22:33:34.766929 28 log.go:181] (0xc000ae8140) (1) Data frame handling\nI0202 22:33:34.766946 28 log.go:181] (0xc000ae8140) (1) Data frame sent\nI0202 22:33:34.767096 28 log.go:181] (0xc000da0000) (0xc000ae8140) Stream removed, broadcasting: 1\nI0202 22:33:34.767132 28 log.go:181] (0xc000da0000) Go away received\nI0202 22:33:34.767821 28 log.go:181] (0xc000da0000) (0xc000ae8140) Stream removed, broadcasting: 1\nI0202 22:33:34.767853 28 log.go:181] (0xc000da0000) (0xc00089d040) Stream removed, broadcasting: 3\nI0202 22:33:34.767873 28 log.go:181] (0xc000da0000) (0xc00089d2c0) Stream removed, broadcasting: 5\n" Feb 2 22:33:34.775: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Feb 2 22:33:34.775: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' STEP: Updating StatefulSet template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine Feb 2 22:33:44.810: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Updating Pods in reverse ordinal order Feb 2 22:33:54.867: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-8210 exec ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Feb 2 22:33:55.119: INFO: stderr: "I0202 22:33:55.006387 46 log.go:181] (0xc000142370) (0xc000e16000) Create stream\nI0202 22:33:55.006454 46 log.go:181] (0xc000142370) (0xc000e16000) Stream added, broadcasting: 1\nI0202 22:33:55.008405 46 log.go:181] (0xc000142370) Reply frame received for 1\nI0202 22:33:55.008444 46 log.go:181] (0xc000142370) (0xc000ba01e0) Create stream\nI0202 22:33:55.008473 46 log.go:181] (0xc000142370) (0xc000ba01e0) Stream added, broadcasting: 3\nI0202 22:33:55.009540 46 log.go:181] (0xc000142370) Reply frame received for 3\nI0202 22:33:55.009582 46 log.go:181] (0xc000142370) (0xc0003277c0) Create stream\nI0202 22:33:55.009594 46 log.go:181] (0xc000142370) (0xc0003277c0) Stream added, broadcasting: 
5\nI0202 22:33:55.010534 46 log.go:181] (0xc000142370) Reply frame received for 5\nI0202 22:33:55.110874 46 log.go:181] (0xc000142370) Data frame received for 3\nI0202 22:33:55.110919 46 log.go:181] (0xc000ba01e0) (3) Data frame handling\nI0202 22:33:55.110935 46 log.go:181] (0xc000ba01e0) (3) Data frame sent\nI0202 22:33:55.110948 46 log.go:181] (0xc000142370) Data frame received for 3\nI0202 22:33:55.110959 46 log.go:181] (0xc000ba01e0) (3) Data frame handling\nI0202 22:33:55.111011 46 log.go:181] (0xc000142370) Data frame received for 5\nI0202 22:33:55.111038 46 log.go:181] (0xc0003277c0) (5) Data frame handling\nI0202 22:33:55.111058 46 log.go:181] (0xc0003277c0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0202 22:33:55.111078 46 log.go:181] (0xc000142370) Data frame received for 5\nI0202 22:33:55.111098 46 log.go:181] (0xc0003277c0) (5) Data frame handling\nI0202 22:33:55.112400 46 log.go:181] (0xc000142370) Data frame received for 1\nI0202 22:33:55.112426 46 log.go:181] (0xc000e16000) (1) Data frame handling\nI0202 22:33:55.112459 46 log.go:181] (0xc000e16000) (1) Data frame sent\nI0202 22:33:55.112498 46 log.go:181] (0xc000142370) (0xc000e16000) Stream removed, broadcasting: 1\nI0202 22:33:55.112786 46 log.go:181] (0xc000142370) Go away received\nI0202 22:33:55.113179 46 log.go:181] (0xc000142370) (0xc000e16000) Stream removed, broadcasting: 1\nI0202 22:33:55.113224 46 log.go:181] (0xc000142370) (0xc000ba01e0) Stream removed, broadcasting: 3\nI0202 22:33:55.113254 46 log.go:181] (0xc000142370) (0xc0003277c0) Stream removed, broadcasting: 5\n" Feb 2 22:33:55.119: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Feb 2 22:33:55.119: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Feb 2 22:34:05.139: INFO: Waiting for StatefulSet statefulset-8210/ss2 to complete update Feb 2 22:34:05.139: INFO: Waiting for Pod statefulset-8210/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Feb 2 22:34:05.139: INFO: Waiting for Pod statefulset-8210/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Feb 2 22:34:05.139: INFO: Waiting for Pod statefulset-8210/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Feb 2 22:34:15.147: INFO: Waiting for StatefulSet statefulset-8210/ss2 to complete update Feb 2 22:34:15.147: INFO: Waiting for Pod statefulset-8210/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Feb 2 22:34:15.147: INFO: Waiting for Pod statefulset-8210/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Feb 2 22:34:15.147: INFO: Waiting for Pod statefulset-8210/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Feb 2 22:34:25.145: INFO: Waiting for StatefulSet statefulset-8210/ss2 to complete update Feb 2 22:34:25.145: INFO: Waiting for Pod statefulset-8210/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Feb 2 22:34:25.145: INFO: Waiting for Pod statefulset-8210/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Feb 2 22:34:25.145: INFO: Waiting for Pod statefulset-8210/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Feb 2 22:34:35.148: INFO: Waiting for StatefulSet statefulset-8210/ss2 to complete update Feb 2 22:34:35.148: INFO: Waiting for Pod statefulset-8210/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Feb 2 22:34:35.148: INFO: Waiting for Pod 
statefulset-8210/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Feb 2 22:34:35.148: INFO: Waiting for Pod statefulset-8210/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Feb 2 22:34:45.258: INFO: Waiting for StatefulSet statefulset-8210/ss2 to complete update Feb 2 22:34:45.258: INFO: Waiting for Pod statefulset-8210/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Feb 2 22:34:45.258: INFO: Waiting for Pod statefulset-8210/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Feb 2 22:34:55.149: INFO: Waiting for StatefulSet statefulset-8210/ss2 to complete update Feb 2 22:34:55.149: INFO: Waiting for Pod statefulset-8210/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Feb 2 22:34:55.149: INFO: Waiting for Pod statefulset-8210/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Feb 2 22:35:05.147: INFO: Waiting for StatefulSet statefulset-8210/ss2 to complete update Feb 2 22:35:05.147: INFO: Waiting for Pod statefulset-8210/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Feb 2 22:35:05.147: INFO: Waiting for Pod statefulset-8210/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Feb 2 22:35:15.174: INFO: Waiting for StatefulSet statefulset-8210/ss2 to complete update Feb 2 22:35:15.174: INFO: Waiting for Pod statefulset-8210/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Feb 2 22:35:15.174: INFO: Waiting for Pod statefulset-8210/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Feb 2 22:35:25.145: INFO: Waiting for StatefulSet statefulset-8210/ss2 to complete update Feb 2 22:35:25.145: INFO: Waiting for Pod statefulset-8210/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Feb 2 22:35:25.145: INFO: Waiting for Pod statefulset-8210/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Feb 2 22:35:35.171: INFO: Waiting for StatefulSet statefulset-8210/ss2 to complete update Feb 2 22:35:35.171: INFO: Waiting for Pod statefulset-8210/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Feb 2 22:35:35.171: INFO: Waiting for Pod statefulset-8210/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Feb 2 22:35:45.145: INFO: Waiting for StatefulSet statefulset-8210/ss2 to complete update Feb 2 22:35:45.145: INFO: Waiting for Pod statefulset-8210/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Feb 2 22:35:55.148: INFO: Waiting for StatefulSet statefulset-8210/ss2 to complete update Feb 2 22:35:55.148: INFO: Waiting for Pod statefulset-8210/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Feb 2 22:36:05.148: INFO: Waiting for StatefulSet statefulset-8210/ss2 to complete update Feb 2 22:36:05.148: INFO: Waiting for Pod statefulset-8210/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Feb 2 22:36:15.146: INFO: Waiting for StatefulSet statefulset-8210/ss2 to complete update Feb 2 22:36:15.146: INFO: Waiting for Pod statefulset-8210/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Feb 2 22:36:25.145: INFO: Waiting for StatefulSet statefulset-8210/ss2 to complete update Feb 2 22:36:25.146: INFO: Waiting for Pod statefulset-8210/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Feb 2 22:36:35.150: INFO: Waiting for StatefulSet statefulset-8210/ss2 to complete update Feb 2 22:36:35.151: INFO: Waiting for Pod statefulset-8210/ss2-0 to have revision 
ss2-84f9d6bf57 update revision ss2-65c7964b94 STEP: Rolling back to a previous revision Feb 2 22:36:45.149: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-8210 exec ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Feb 2 22:36:45.528: INFO: stderr: "I0202 22:36:45.282219 64 log.go:181] (0xc0008e5080) (0xc000be41e0) Create stream\nI0202 22:36:45.282302 64 log.go:181] (0xc0008e5080) (0xc000be41e0) Stream added, broadcasting: 1\nI0202 22:36:45.284076 64 log.go:181] (0xc0008e5080) Reply frame received for 1\nI0202 22:36:45.284141 64 log.go:181] (0xc0008e5080) (0xc000be4280) Create stream\nI0202 22:36:45.284169 64 log.go:181] (0xc0008e5080) (0xc000be4280) Stream added, broadcasting: 3\nI0202 22:36:45.285231 64 log.go:181] (0xc0008e5080) Reply frame received for 3\nI0202 22:36:45.285297 64 log.go:181] (0xc0008e5080) (0xc000556000) Create stream\nI0202 22:36:45.285310 64 log.go:181] (0xc0008e5080) (0xc000556000) Stream added, broadcasting: 5\nI0202 22:36:45.286494 64 log.go:181] (0xc0008e5080) Reply frame received for 5\nI0202 22:36:45.385380 64 log.go:181] (0xc0008e5080) Data frame received for 5\nI0202 22:36:45.385408 64 log.go:181] (0xc000556000) (5) Data frame handling\nI0202 22:36:45.385426 64 log.go:181] (0xc000556000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0202 22:36:45.519350 64 log.go:181] (0xc0008e5080) Data frame received for 3\nI0202 22:36:45.519375 64 log.go:181] (0xc000be4280) (3) Data frame handling\nI0202 22:36:45.519386 64 log.go:181] (0xc000be4280) (3) Data frame sent\nI0202 22:36:45.519619 64 log.go:181] (0xc0008e5080) Data frame received for 5\nI0202 22:36:45.519633 64 log.go:181] (0xc000556000) (5) Data frame handling\nI0202 22:36:45.519812 64 log.go:181] (0xc0008e5080) Data frame received for 3\nI0202 22:36:45.519823 64 log.go:181] (0xc000be4280) (3) Data frame handling\nI0202 22:36:45.521889 64 log.go:181] (0xc0008e5080) Data frame received for 1\nI0202 22:36:45.521909 64 log.go:181] (0xc000be41e0) (1) Data frame handling\nI0202 22:36:45.521919 64 log.go:181] (0xc000be41e0) (1) Data frame sent\nI0202 22:36:45.521930 64 log.go:181] (0xc0008e5080) (0xc000be41e0) Stream removed, broadcasting: 1\nI0202 22:36:45.522257 64 log.go:181] (0xc0008e5080) (0xc000be41e0) Stream removed, broadcasting: 1\nI0202 22:36:45.522275 64 log.go:181] (0xc0008e5080) (0xc000be4280) Stream removed, broadcasting: 3\nI0202 22:36:45.522284 64 log.go:181] (0xc0008e5080) (0xc000556000) Stream removed, broadcasting: 5\n" Feb 2 22:36:45.528: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Feb 2 22:36:45.528: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Feb 2 22:36:55.569: INFO: Updating stateful set ss2 STEP: Rolling back update in reverse ordinal order Feb 2 22:37:05.633: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-8210 exec ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Feb 2 22:37:05.853: INFO: stderr: "I0202 22:37:05.760364 82 log.go:181] (0xc00003a420) (0xc000b681e0) Create stream\nI0202 22:37:05.760424 82 log.go:181] (0xc00003a420) (0xc000b681e0) Stream added, broadcasting: 1\nI0202 22:37:05.762560 82 log.go:181] (0xc00003a420) Reply frame received for 1\nI0202 22:37:05.762607 82 log.go:181] (0xc00003a420) 
(0xc000397360) Create stream\nI0202 22:37:05.762619 82 log.go:181] (0xc00003a420) (0xc000397360) Stream added, broadcasting: 3\nI0202 22:37:05.763807 82 log.go:181] (0xc00003a420) Reply frame received for 3\nI0202 22:37:05.763846 82 log.go:181] (0xc00003a420) (0xc000b90280) Create stream\nI0202 22:37:05.763863 82 log.go:181] (0xc00003a420) (0xc000b90280) Stream added, broadcasting: 5\nI0202 22:37:05.765031 82 log.go:181] (0xc00003a420) Reply frame received for 5\nI0202 22:37:05.846316 82 log.go:181] (0xc00003a420) Data frame received for 3\nI0202 22:37:05.846340 82 log.go:181] (0xc000397360) (3) Data frame handling\nI0202 22:37:05.846348 82 log.go:181] (0xc000397360) (3) Data frame sent\nI0202 22:37:05.846353 82 log.go:181] (0xc00003a420) Data frame received for 3\nI0202 22:37:05.846357 82 log.go:181] (0xc000397360) (3) Data frame handling\nI0202 22:37:05.846387 82 log.go:181] (0xc00003a420) Data frame received for 5\nI0202 22:37:05.846413 82 log.go:181] (0xc000b90280) (5) Data frame handling\nI0202 22:37:05.846446 82 log.go:181] (0xc000b90280) (5) Data frame sent\nI0202 22:37:05.846462 82 log.go:181] (0xc00003a420) Data frame received for 5\nI0202 22:37:05.846480 82 log.go:181] (0xc000b90280) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0202 22:37:05.848093 82 log.go:181] (0xc00003a420) Data frame received for 1\nI0202 22:37:05.848135 82 log.go:181] (0xc000b681e0) (1) Data frame handling\nI0202 22:37:05.848163 82 log.go:181] (0xc000b681e0) (1) Data frame sent\nI0202 22:37:05.848176 82 log.go:181] (0xc00003a420) (0xc000b681e0) Stream removed, broadcasting: 1\nI0202 22:37:05.848197 82 log.go:181] (0xc00003a420) Go away received\nI0202 22:37:05.848501 82 log.go:181] (0xc00003a420) (0xc000b681e0) Stream removed, broadcasting: 1\nI0202 22:37:05.848522 82 log.go:181] (0xc00003a420) (0xc000397360) Stream removed, broadcasting: 3\nI0202 22:37:05.848532 82 log.go:181] (0xc00003a420) (0xc000b90280) Stream removed, broadcasting: 5\n" Feb 2 22:37:05.853: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Feb 2 22:37:05.853: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Feb 2 22:37:15.900: INFO: Waiting for StatefulSet statefulset-8210/ss2 to complete update Feb 2 22:37:15.900: INFO: Waiting for Pod statefulset-8210/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Feb 2 22:37:15.900: INFO: Waiting for Pod statefulset-8210/ss2-1 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Feb 2 22:37:15.900: INFO: Waiting for Pod statefulset-8210/ss2-2 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Feb 2 22:37:25.908: INFO: Waiting for StatefulSet statefulset-8210/ss2 to complete update Feb 2 22:37:25.908: INFO: Waiting for Pod statefulset-8210/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Feb 2 22:37:25.908: INFO: Waiting for Pod statefulset-8210/ss2-1 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Feb 2 22:37:25.908: INFO: Waiting for Pod statefulset-8210/ss2-2 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Feb 2 22:37:35.909: INFO: Waiting for StatefulSet statefulset-8210/ss2 to complete update Feb 2 22:37:35.909: INFO: Waiting for Pod statefulset-8210/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Feb 2 22:37:35.909: INFO: Waiting for Pod statefulset-8210/ss2-1 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Feb 2 
22:37:35.909: INFO: Waiting for Pod statefulset-8210/ss2-2 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Feb 2 22:37:45.906: INFO: Waiting for StatefulSet statefulset-8210/ss2 to complete update Feb 2 22:37:45.906: INFO: Waiting for Pod statefulset-8210/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Feb 2 22:37:45.906: INFO: Waiting for Pod statefulset-8210/ss2-1 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Feb 2 22:37:45.906: INFO: Waiting for Pod statefulset-8210/ss2-2 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Feb 2 22:37:55.909: INFO: Waiting for StatefulSet statefulset-8210/ss2 to complete update Feb 2 22:37:55.909: INFO: Waiting for Pod statefulset-8210/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Feb 2 22:37:55.909: INFO: Waiting for Pod statefulset-8210/ss2-1 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Feb 2 22:37:55.909: INFO: Waiting for Pod statefulset-8210/ss2-2 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Feb 2 22:38:05.925: INFO: Waiting for StatefulSet statefulset-8210/ss2 to complete update Feb 2 22:38:05.925: INFO: Waiting for Pod statefulset-8210/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Feb 2 22:38:05.925: INFO: Waiting for Pod statefulset-8210/ss2-1 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Feb 2 22:38:15.909: INFO: Waiting for StatefulSet statefulset-8210/ss2 to complete update Feb 2 22:38:15.909: INFO: Waiting for Pod statefulset-8210/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Feb 2 22:38:15.909: INFO: Waiting for Pod statefulset-8210/ss2-1 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Feb 2 22:38:25.910: INFO: Waiting for StatefulSet statefulset-8210/ss2 to complete update Feb 2 22:38:25.910: INFO: Waiting for Pod statefulset-8210/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Feb 2 22:38:25.910: INFO: Waiting for Pod statefulset-8210/ss2-1 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Feb 2 22:38:35.909: INFO: Waiting for StatefulSet statefulset-8210/ss2 to complete update Feb 2 22:38:35.909: INFO: Waiting for Pod statefulset-8210/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Feb 2 22:38:35.909: INFO: Waiting for Pod statefulset-8210/ss2-1 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Feb 2 22:38:45.908: INFO: Waiting for StatefulSet statefulset-8210/ss2 to complete update Feb 2 22:38:45.908: INFO: Waiting for Pod statefulset-8210/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Feb 2 22:38:45.908: INFO: Waiting for Pod statefulset-8210/ss2-1 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Feb 2 22:38:55.924: INFO: Waiting for StatefulSet statefulset-8210/ss2 to complete update Feb 2 22:38:55.924: INFO: Waiting for Pod statefulset-8210/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 Feb 2 22:39:05.909: INFO: Deleting all statefulset in ns statefulset-8210 Feb 2 22:39:05.930: INFO: Scaling statefulset ss2 to 0 Feb 2 22:39:55.964: INFO: Waiting for statefulset status.replicas updated to 0 Feb 2 22:39:55.967: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 2 22:39:55.982: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-8210" for this suite. • [SLOW TEST:404.558 seconds] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 should perform rolling updates and roll backs of template modifications [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":309,"completed":4,"skipped":56,"failed":0} SSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 2 22:39:55.989: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41 [It] should provide container's memory limit [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a pod to test downward API volume plugin Feb 2 22:39:56.132: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5becde3e-528e-4238-8f7c-8f9ad15b2900" in namespace "downward-api-2032" to be "Succeeded or Failed" Feb 2 22:39:56.135: INFO: Pod "downwardapi-volume-5becde3e-528e-4238-8f7c-8f9ad15b2900": Phase="Pending", Reason="", readiness=false. Elapsed: 3.217193ms Feb 2 22:39:58.471: INFO: Pod "downwardapi-volume-5becde3e-528e-4238-8f7c-8f9ad15b2900": Phase="Pending", Reason="", readiness=false. Elapsed: 2.339408059s Feb 2 22:40:00.494: INFO: Pod "downwardapi-volume-5becde3e-528e-4238-8f7c-8f9ad15b2900": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.362415313s STEP: Saw pod success Feb 2 22:40:00.494: INFO: Pod "downwardapi-volume-5becde3e-528e-4238-8f7c-8f9ad15b2900" satisfied condition "Succeeded or Failed" Feb 2 22:40:00.497: INFO: Trying to get logs from node leguer-worker pod downwardapi-volume-5becde3e-528e-4238-8f7c-8f9ad15b2900 container client-container: STEP: delete the pod Feb 2 22:40:00.544: INFO: Waiting for pod downwardapi-volume-5becde3e-528e-4238-8f7c-8f9ad15b2900 to disappear Feb 2 22:40:00.555: INFO: Pod downwardapi-volume-5becde3e-528e-4238-8f7c-8f9ad15b2900 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 2 22:40:00.555: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2032" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":309,"completed":5,"skipped":64,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Job should delete a job [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 2 22:40:00.568: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a job [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: delete a job STEP: deleting Job.batch foo in namespace job-7392, will wait for the garbage collector to delete the pods Feb 2 22:40:07.247: INFO: Deleting Job.batch foo took: 10.455416ms Feb 2 22:40:07.848: INFO: Terminating Job.batch foo pods took: 600.336473ms STEP: Ensuring job was deleted [AfterEach] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 2 22:40:40.651: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-7392" for this suite. 
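The "should delete a job" spec above exercises cascading deletion: the Job object is removed and the garbage collector then reaps its pods ("will wait for the garbage collector to delete the pods"). The following is a minimal client-go sketch of the same operation; the kubeconfig path, namespace, and Job name are taken from the log, while the choice of background propagation is an assumption for illustration, not the suite's actual implementation.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the same kubeconfig the suite uses.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Delete the Job and let the garbage collector clean up its pods,
	// mirroring the "will wait for the garbage collector" step above.
	policy := metav1.DeletePropagationBackground // assumed propagation policy
	err = client.BatchV1().Jobs("job-7392").Delete(context.TODO(), "foo",
		metav1.DeleteOptions{PropagationPolicy: &policy})
	if err != nil {
		panic(err)
	}
	fmt.Println("requested deletion of Job foo; its pods will be garbage-collected")
}

After the delete call returns, the suite still has to poll until the pods are gone, which is why the log shows roughly 30 seconds elapsing before the namespace is destroyed.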
• [SLOW TEST:40.093 seconds] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should delete a job [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":309,"completed":6,"skipped":127,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 2 22:40:40.661: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating configMap with name configmap-test-volume-33370ecc-6c94-4864-8cbd-d3afcaeaac70 STEP: Creating a pod to test consume configMaps Feb 2 22:40:40.759: INFO: Waiting up to 5m0s for pod "pod-configmaps-cf34272f-98e3-47c6-a75a-63e2ca036f18" in namespace "configmap-8793" to be "Succeeded or Failed" Feb 2 22:40:40.765: INFO: Pod "pod-configmaps-cf34272f-98e3-47c6-a75a-63e2ca036f18": Phase="Pending", Reason="", readiness=false. Elapsed: 5.8158ms Feb 2 22:40:42.769: INFO: Pod "pod-configmaps-cf34272f-98e3-47c6-a75a-63e2ca036f18": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009940926s Feb 2 22:40:44.773: INFO: Pod "pod-configmaps-cf34272f-98e3-47c6-a75a-63e2ca036f18": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013557559s STEP: Saw pod success Feb 2 22:40:44.773: INFO: Pod "pod-configmaps-cf34272f-98e3-47c6-a75a-63e2ca036f18" satisfied condition "Succeeded or Failed" Feb 2 22:40:44.775: INFO: Trying to get logs from node leguer-worker pod pod-configmaps-cf34272f-98e3-47c6-a75a-63e2ca036f18 container configmap-volume-test: STEP: delete the pod Feb 2 22:40:44.964: INFO: Waiting for pod pod-configmaps-cf34272f-98e3-47c6-a75a-63e2ca036f18 to disappear Feb 2 22:40:44.982: INFO: Pod pod-configmaps-cf34272f-98e3-47c6-a75a-63e2ca036f18 no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 2 22:40:44.982: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-8793" for this suite. 
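The ConfigMap spec above mounts a single ConfigMap into one pod through more than one volume and checks the projected file contents. A minimal sketch of such a pod object is shown below; the ConfigMap name, container command, and mount paths are illustrative assumptions, not the suite's actual values.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	cmName := "configmap-test-volume-example" // illustrative ConfigMap name
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-configmaps-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			// Two volumes backed by the same ConfigMap, mounted at different paths.
			Volumes: []corev1.Volume{
				{
					Name: "configmap-volume-1",
					VolumeSource: corev1.VolumeSource{
						ConfigMap: &corev1.ConfigMapVolumeSource{
							LocalObjectReference: corev1.LocalObjectReference{Name: cmName},
						},
					},
				},
				{
					Name: "configmap-volume-2",
					VolumeSource: corev1.VolumeSource{
						ConfigMap: &corev1.ConfigMapVolumeSource{
							LocalObjectReference: corev1.LocalObjectReference{Name: cmName},
						},
					},
				},
			},
			Containers: []corev1.Container{{
				Name:    "configmap-volume-test",
				Image:   "docker.io/library/busybox:1.29",
				Command: []string{"sh", "-c", "cat /etc/configmap-1/* /etc/configmap-2/*"},
				VolumeMounts: []corev1.VolumeMount{
					{Name: "configmap-volume-1", MountPath: "/etc/configmap-1"},
					{Name: "configmap-volume-2", MountPath: "/etc/configmap-2"},
				},
			}},
		},
	}
	fmt.Printf("%+v\n", pod)
}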
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":309,"completed":7,"skipped":140,"failed":0} SSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 2 22:40:44.993: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: creating secret secrets-9653/secret-test-d5a6daf2-d3db-4ec1-b681-13be843f54e8 STEP: Creating a pod to test consume secrets Feb 2 22:40:45.060: INFO: Waiting up to 5m0s for pod "pod-configmaps-57e25884-08c3-4117-a0ca-4ca35b9f6484" in namespace "secrets-9653" to be "Succeeded or Failed" Feb 2 22:40:45.131: INFO: Pod "pod-configmaps-57e25884-08c3-4117-a0ca-4ca35b9f6484": Phase="Pending", Reason="", readiness=false. Elapsed: 70.583524ms Feb 2 22:40:47.136: INFO: Pod "pod-configmaps-57e25884-08c3-4117-a0ca-4ca35b9f6484": Phase="Pending", Reason="", readiness=false. Elapsed: 2.075680695s Feb 2 22:40:49.141: INFO: Pod "pod-configmaps-57e25884-08c3-4117-a0ca-4ca35b9f6484": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.080772179s STEP: Saw pod success Feb 2 22:40:49.141: INFO: Pod "pod-configmaps-57e25884-08c3-4117-a0ca-4ca35b9f6484" satisfied condition "Succeeded or Failed" Feb 2 22:40:49.144: INFO: Trying to get logs from node leguer-worker pod pod-configmaps-57e25884-08c3-4117-a0ca-4ca35b9f6484 container env-test: STEP: delete the pod Feb 2 22:40:49.188: INFO: Waiting for pod pod-configmaps-57e25884-08c3-4117-a0ca-4ca35b9f6484 to disappear Feb 2 22:40:49.231: INFO: Pod pod-configmaps-57e25884-08c3-4117-a0ca-4ca35b9f6484 no longer exists [AfterEach] [sig-api-machinery] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 2 22:40:49.231: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-9653" for this suite. 
•{"msg":"PASSED [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":309,"completed":8,"skipped":149,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 2 22:40:49.240: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a pod to test downward API volume plugin Feb 2 22:40:49.347: INFO: Waiting up to 5m0s for pod "downwardapi-volume-551daf17-2d64-403c-89f6-fd7e8b61769b" in namespace "projected-5159" to be "Succeeded or Failed" Feb 2 22:40:49.362: INFO: Pod "downwardapi-volume-551daf17-2d64-403c-89f6-fd7e8b61769b": Phase="Pending", Reason="", readiness=false. Elapsed: 14.738013ms Feb 2 22:40:51.366: INFO: Pod "downwardapi-volume-551daf17-2d64-403c-89f6-fd7e8b61769b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018673755s Feb 2 22:40:53.370: INFO: Pod "downwardapi-volume-551daf17-2d64-403c-89f6-fd7e8b61769b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.023095862s Feb 2 22:40:55.375: INFO: Pod "downwardapi-volume-551daf17-2d64-403c-89f6-fd7e8b61769b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.028375437s STEP: Saw pod success Feb 2 22:40:55.376: INFO: Pod "downwardapi-volume-551daf17-2d64-403c-89f6-fd7e8b61769b" satisfied condition "Succeeded or Failed" Feb 2 22:40:55.379: INFO: Trying to get logs from node leguer-worker pod downwardapi-volume-551daf17-2d64-403c-89f6-fd7e8b61769b container client-container: STEP: delete the pod Feb 2 22:40:55.429: INFO: Waiting for pod downwardapi-volume-551daf17-2d64-403c-89f6-fd7e8b61769b to disappear Feb 2 22:40:55.433: INFO: Pod downwardapi-volume-551daf17-2d64-403c-89f6-fd7e8b61769b no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 2 22:40:55.433: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5159" for this suite. 
• [SLOW TEST:6.200 seconds] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:35 should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":309,"completed":9,"skipped":166,"failed":0} SSS ------------------------------ [sig-api-machinery] Events should ensure that an event can be fetched, patched, deleted, and listed [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] Events /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 2 22:40:55.440: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that an event can be fetched, patched, deleted, and listed [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: creating a test event STEP: listing all events in all namespaces STEP: patching the test event STEP: fetching the test event STEP: deleting the test event STEP: listing all events in all namespaces [AfterEach] [sig-api-machinery] Events /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 2 22:40:55.632: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-9205" for this suite. 
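The Events spec above walks an event through create, list, patch, get, and delete. A client-go sketch of that lifecycle follows; the namespace, event name, involved object, and the strategic-merge patch are illustrative assumptions, not the suite's actual requests.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ns := "default" // the suite uses a generated namespace; "default" keeps the sketch self-contained
	events := client.CoreV1().Events(ns)
	ctx := context.TODO()

	// Create a test event.
	ev, err := events.Create(ctx, &corev1.Event{
		ObjectMeta:     metav1.ObjectMeta{Name: "test-event-example"},
		InvolvedObject: corev1.ObjectReference{Kind: "Pod", Namespace: ns, Name: "some-pod"},
		Reason:         "Testing",
		Message:        "created for illustration",
		Type:           corev1.EventTypeNormal,
	}, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}

	// List, patch, fetch, then delete it, mirroring the STEP lines in the spec above.
	if _, err := events.List(ctx, metav1.ListOptions{}); err != nil {
		panic(err)
	}
	patch := []byte(`{"message":"patched for illustration"}`)
	if _, err := events.Patch(ctx, ev.Name, types.StrategicMergePatchType, patch, metav1.PatchOptions{}); err != nil {
		panic(err)
	}
	if _, err := events.Get(ctx, ev.Name, metav1.GetOptions{}); err != nil {
		panic(err)
	}
	if err := events.Delete(ctx, ev.Name, metav1.DeleteOptions{}); err != nil {
		panic(err)
	}
	fmt.Println("event lifecycle complete")
}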
•{"msg":"PASSED [sig-api-machinery] Events should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","total":309,"completed":10,"skipped":169,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should release no longer matching pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 2 22:40:55.645: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54 [It] should release no longer matching pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Given a ReplicationController is created STEP: When the matched label of one of its pods change Feb 2 22:40:55.890: INFO: Pod name pod-release: Found 0 pods out of 1 Feb 2 22:41:00.893: INFO: Pod name pod-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 2 22:41:01.920: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-517" for this suite. • [SLOW TEST:6.285 seconds] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should release no longer matching pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":309,"completed":11,"skipped":227,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 2 22:41:01.930: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all services are removed when a namespace is deleted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a service in the namespace STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. 
STEP: Recreating the namespace STEP: Verifying there is no service in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 2 22:41:09.578: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-785" for this suite. STEP: Destroying namespace "nsdeletetest-1685" for this suite. Feb 2 22:41:09.590: INFO: Namespace nsdeletetest-1685 was already deleted STEP: Destroying namespace "nsdeletetest-3502" for this suite. • [SLOW TEST:7.664 seconds] [sig-api-machinery] Namespaces [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all services are removed when a namespace is deleted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":309,"completed":12,"skipped":247,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 2 22:41:09.594: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:162 [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: creating the pod Feb 2 22:41:09.674: INFO: PodSpec: initContainers in spec.initContainers Feb 2 22:42:04.000: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-aef27e71-c5af-4a69-a538-e56c82dc20d8", GenerateName:"", Namespace:"init-container-9472", SelfLink:"", UID:"647082c0-7733-44c2-ba1c-d17c378309fc", ResourceVersion:"4167851", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63747902469, loc:(*time.Location)(0x7962e20)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"674315755"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc00137e2c0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc00137e300)}, v1.ManagedFieldsEntry{Manager:"kubelet", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc00137e320), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc00137e580)}}}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-5nzpj", 
VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc00358ed40), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-5nzpj", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-5nzpj", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.2", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, 
d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-5nzpj", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0034cfcf8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"leguer-worker", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc003836930), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0034cfd80)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0034cfda0)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc0034cfda8), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc0034cfdac), PreemptionPolicy:(*v1.PreemptionPolicy)(0xc00232a390), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63747902469, loc:(*time.Location)(0x7962e20)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63747902469, loc:(*time.Location)(0x7962e20)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63747902469, loc:(*time.Location)(0x7962e20)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63747902469, loc:(*time.Location)(0x7962e20)}}, Reason:"", Message:""}}, Message:"", Reason:"", 
NominatedNodeName:"", HostIP:"172.18.0.13", PodIP:"10.244.2.153", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.244.2.153"}}, StartTime:(*v1.Time)(0xc00137e5a0), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(0xc00137e840), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc003836a10)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://81028d37c5b704d54fe1e8f9299794a6435868914a1f175a40d6a580125e81fc", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc00137e8e0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc00137e820), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.2", ImageID:"", ContainerID:"", Started:(*bool)(0xc0034cfe2f)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}} [AfterEach] [k8s.io] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 2 22:42:04.001: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-9472" for this suite. 
• [SLOW TEST:54.467 seconds] [k8s.io] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 should not start app containers if init containers fail on a RestartAlways pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":309,"completed":13,"skipped":262,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 2 22:42:04.061: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:85 [It] RecreateDeployment should delete old pods and create new ones [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 Feb 2 22:42:04.166: INFO: Creating deployment "test-recreate-deployment" Feb 2 22:42:04.184: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1 Feb 2 22:42:04.257: INFO: deployment "test-recreate-deployment" doesn't have the required revision set Feb 2 22:42:06.265: INFO: Waiting deployment "test-recreate-deployment" to complete Feb 2 22:42:06.268: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63747902524, loc:(*time.Location)(0x7962e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63747902524, loc:(*time.Location)(0x7962e20)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63747902524, loc:(*time.Location)(0x7962e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63747902524, loc:(*time.Location)(0x7962e20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-786dd7c454\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 2 22:42:08.273: INFO: Triggering a new rollout for deployment "test-recreate-deployment" Feb 2 22:42:08.283: INFO: Updating deployment test-recreate-deployment Feb 2 22:42:08.283: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with olds pods [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:79 Feb 2 22:42:08.940: INFO: Deployment "test-recreate-deployment": &Deployment{ObjectMeta:{test-recreate-deployment deployment-9433 0324083f-a3fb-40e2-a3a8-960bf5e896fb 4167909 2 2021-02-02 22:42:04 
+0000 UTC map[name:sample-pod-3] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2021-02-02 22:42:08 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2021-02-02 22:42:08 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:replicas":{},"f:unavailableReplicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0037ab688 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2021-02-02 22:42:08 +0000 UTC,LastTransitionTime:2021-02-02 22:42:08 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "test-recreate-deployment-f79dd4667" is progressing.,LastUpdateTime:2021-02-02 22:42:08 +0000 UTC,LastTransitionTime:2021-02-02 22:42:04 +0000 UTC,},},ReadyReplicas:0,CollisionCount:nil,},} Feb 2 22:42:08.945: INFO: New ReplicaSet "test-recreate-deployment-f79dd4667" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:{test-recreate-deployment-f79dd4667 deployment-9433 8c4d65fc-1813-4f0c-a888-bf4f69998301 4167907 1 2021-02-02 22:42:08 +0000 UTC map[name:sample-pod-3 pod-template-hash:f79dd4667] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-recreate-deployment 
0324083f-a3fb-40e2-a3a8-960bf5e896fb 0xc0037abaf0 0xc0037abaf1}] [] [{kube-controller-manager Update apps/v1 2021-02-02 22:42:08 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0324083f-a3fb-40e2-a3a8-960bf5e896fb\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: f79dd4667,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:f79dd4667] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0037abb68 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Feb 2 22:42:08.945: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": Feb 2 22:42:08.945: INFO: &ReplicaSet{ObjectMeta:{test-recreate-deployment-786dd7c454 deployment-9433 0a90d705-a1ae-443f-8ef3-221feccba400 4167897 2 2021-02-02 22:42:04 +0000 UTC map[name:sample-pod-3 pod-template-hash:786dd7c454] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-recreate-deployment 0324083f-a3fb-40e2-a3a8-960bf5e896fb 0xc0037ab9e7 0xc0037ab9e8}] [] [{kube-controller-manager Update apps/v1 2021-02-02 22:42:08 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0324083f-a3fb-40e2-a3a8-960bf5e896fb\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 786dd7c454,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:786dd7c454] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.21 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0037aba78 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Feb 2 22:42:08.954: INFO: Pod "test-recreate-deployment-f79dd4667-zhvlc" is not available: &Pod{ObjectMeta:{test-recreate-deployment-f79dd4667-zhvlc test-recreate-deployment-f79dd4667- deployment-9433 3875792b-aacc-478d-b006-e58cf3aa4448 4167910 0 2021-02-02 22:42:08 +0000 UTC map[name:sample-pod-3 pod-template-hash:f79dd4667] map[] [{apps/v1 ReplicaSet test-recreate-deployment-f79dd4667 8c4d65fc-1813-4f0c-a888-bf4f69998301 0xc0037abf60 0xc0037abf61}] [] [{kube-controller-manager Update v1 2021-02-02 22:42:08 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8c4d65fc-1813-4f0c-a888-bf4f69998301\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-02-02 22:42:08 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-t5cj2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-t5cj2,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-t5cj2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-02 22:42:08 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-02 22:42:08 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-02 22:42:08 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-02 22:42:08 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:,StartTime:2021-02-02 22:42:08 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 2 22:42:08.954: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-9433" for this suite. •{"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":309,"completed":14,"skipped":274,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 2 22:42:08.963: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation and release no longer matching pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Given a Pod with a 'name' label pod-adoption-release is created STEP: When a replicaset with a matching selector is created STEP: Then the orphan pod is adopted STEP: When the matched label of one of its pods change Feb 2 22:42:14.148: INFO: Pod name pod-adoption-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 2 22:42:15.164: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-2683" for this suite. 
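What "adopted" and "released" mean here: the ReplicaSet's selector matches the pod's name=pod-adoption-release label, so the controller sets itself as the pod's ownerReference; once that label no longer matches, the controller drops the ownerReference again. A minimal sketch of the release step with client-go (the replacement label value is made up; the log does not show what the test patches it to):

package e2esketch

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
)

// releasePod changes the label the ReplicaSet selector matches on, which is
// the step "When the matched label of one of its pods change" above: the
// controller then removes its ownerReference and the pod is "released".
func releasePod(ctx context.Context, c kubernetes.Interface, ns, pod string) error {
	patch := []byte(`{"metadata":{"labels":{"name":"not-pod-adoption-release"}}}`)
	_, err := c.CoreV1().Pods(ns).Patch(ctx, pod, types.StrategicMergePatchType, patch, metav1.PatchOptions{})
	if err != nil {
		return fmt.Errorf("patching pod %s/%s: %w", ns, pod, err)
	}
	return nil
}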
• [SLOW TEST:6.208 seconds] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation and release no longer matching pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":309,"completed":15,"skipped":332,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 2 22:42:15.171: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247 [It] should check if kubectl describe prints relevant information for rc and pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 Feb 2 22:42:16.067: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-6036 create -f -' Feb 2 22:42:17.236: INFO: stderr: "" Feb 2 22:42:17.236: INFO: stdout: "replicationcontroller/agnhost-primary created\n" Feb 2 22:42:17.236: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-6036 create -f -' Feb 2 22:42:17.744: INFO: stderr: "" Feb 2 22:42:17.744: INFO: stdout: "service/agnhost-primary created\n" STEP: Waiting for Agnhost primary to start. Feb 2 22:42:18.780: INFO: Selector matched 1 pods for map[app:agnhost] Feb 2 22:42:18.780: INFO: Found 0 / 1 Feb 2 22:42:19.748: INFO: Selector matched 1 pods for map[app:agnhost] Feb 2 22:42:19.748: INFO: Found 0 / 1 Feb 2 22:42:20.937: INFO: Selector matched 1 pods for map[app:agnhost] Feb 2 22:42:20.937: INFO: Found 0 / 1 Feb 2 22:42:21.748: INFO: Selector matched 1 pods for map[app:agnhost] Feb 2 22:42:21.748: INFO: Found 1 / 1 Feb 2 22:42:21.748: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Feb 2 22:42:21.751: INFO: Selector matched 1 pods for map[app:agnhost] Feb 2 22:42:21.751: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
Feb 2 22:42:21.751: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-6036 describe pod agnhost-primary-pml6n' Feb 2 22:42:21.867: INFO: stderr: "" Feb 2 22:42:21.867: INFO: stdout: "Name: agnhost-primary-pml6n\nNamespace: kubectl-6036\nPriority: 0\nNode: leguer-worker/172.18.0.13\nStart Time: Tue, 02 Feb 2021 22:42:17 +0000\nLabels: app=agnhost\n role=primary\nAnnotations: \nStatus: Running\nIP: 10.244.2.157\nIPs:\n IP: 10.244.2.157\nControlled By: ReplicationController/agnhost-primary\nContainers:\n agnhost-primary:\n Container ID: containerd://48c7d5f0ca870d34d313f20e91e53cf0f3b632e211682144e1c4522f9e87f118\n Image: k8s.gcr.io/e2e-test-images/agnhost:2.21\n Image ID: k8s.gcr.io/e2e-test-images/agnhost@sha256:ab055cd3d45f50b90732c14593a5bf50f210871bb4f91994c756fc22db6d922a\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Tue, 02 Feb 2021 22:42:20 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-wvjgs (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-wvjgs:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-wvjgs\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s\n node.kubernetes.io/unreachable:NoExecute op=Exists for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 4s default-scheduler Successfully assigned kubectl-6036/agnhost-primary-pml6n to leguer-worker\n Normal Pulled 3s kubelet Container image \"k8s.gcr.io/e2e-test-images/agnhost:2.21\" already present on machine\n Normal Created 1s kubelet Created container agnhost-primary\n Normal Started 1s kubelet Started container agnhost-primary\n" Feb 2 22:42:21.867: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-6036 describe rc agnhost-primary' Feb 2 22:42:21.994: INFO: stderr: "" Feb 2 22:42:21.994: INFO: stdout: "Name: agnhost-primary\nNamespace: kubectl-6036\nSelector: app=agnhost,role=primary\nLabels: app=agnhost\n role=primary\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=agnhost\n role=primary\n Containers:\n agnhost-primary:\n Image: k8s.gcr.io/e2e-test-images/agnhost:2.21\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 4s replication-controller Created pod: agnhost-primary-pml6n\n" Feb 2 22:42:21.994: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-6036 describe service agnhost-primary' Feb 2 22:42:22.103: INFO: stderr: "" Feb 2 22:42:22.103: INFO: stdout: "Name: agnhost-primary\nNamespace: kubectl-6036\nLabels: app=agnhost\n role=primary\nAnnotations: \nSelector: app=agnhost,role=primary\nType: ClusterIP\nIP Families: \nIP: 10.96.88.50\nIPs: 10.96.88.50\nPort: 6379/TCP\nTargetPort: agnhost-server/TCP\nEndpoints: 10.244.2.157:6379\nSession Affinity: None\nEvents: \n" Feb 2 22:42:22.107: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-6036 describe node leguer-control-plane' Feb 2 
22:42:22.236: INFO: stderr: "" Feb 2 22:42:22.236: INFO: stdout: "Name: leguer-control-plane\nRoles: control-plane,master\nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=leguer-control-plane\n kubernetes.io/os=linux\n node-role.kubernetes.io/control-plane=\n node-role.kubernetes.io/master=\nAnnotations: kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Sun, 10 Jan 2021 17:37:43 +0000\nTaints: node-role.kubernetes.io/master:NoSchedule\nUnschedulable: false\nLease:\n HolderIdentity: leguer-control-plane\n AcquireTime: \n RenewTime: Tue, 02 Feb 2021 22:42:16 +0000\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n MemoryPressure False Tue, 02 Feb 2021 22:38:45 +0000 Sun, 10 Jan 2021 17:37:43 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Tue, 02 Feb 2021 22:38:45 +0000 Sun, 10 Jan 2021 17:37:43 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Tue, 02 Feb 2021 22:38:45 +0000 Sun, 10 Jan 2021 17:37:43 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Tue, 02 Feb 2021 22:38:45 +0000 Sun, 10 Jan 2021 17:38:11 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 172.18.0.17\n Hostname: leguer-control-plane\nCapacity:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759868Ki\n pods: 110\nAllocatable:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759868Ki\n pods: 110\nSystem Info:\n Machine ID: 5f1cb3b1931a44e6bb33804f4b6ca7e5\n System UUID: c2287e83-2c9f-458f-8294-12965d8d5e30\n Boot ID: b267d78b-f69b-4338-80e8-3f4944338e5d\n Kernel Version: 4.15.0-118-generic\n OS Image: Ubuntu Groovy Gorilla (development branch)\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.4.0\n Kubelet Version: v1.20.0\n Kube-Proxy Version: v1.20.0\nPodCIDR: 10.244.0.0/24\nPodCIDRs: 10.244.0.0/24\nProviderID: kind://docker/leguer/leguer-control-plane\nNon-terminated Pods: (9 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system coredns-74ff55c5b-flmf7 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 23d\n kube-system coredns-74ff55c5b-whxn7 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 23d\n kube-system etcd-leguer-control-plane 100m (0%) 0 (0%) 100Mi (0%) 0 (0%) 23d\n kube-system kindnet-rjz52 100m (0%) 100m (0%) 50Mi (0%) 50Mi (0%) 23d\n kube-system kube-apiserver-leguer-control-plane 250m (1%) 0 (0%) 0 (0%) 0 (0%) 23d\n kube-system kube-controller-manager-leguer-control-plane 200m (1%) 0 (0%) 0 (0%) 0 (0%) 23d\n kube-system kube-proxy-chqjl 0 (0%) 0 (0%) 0 (0%) 0 (0%) 23d\n kube-system kube-scheduler-leguer-control-plane 100m (0%) 0 (0%) 0 (0%) 0 (0%) 23d\n local-path-storage local-path-provisioner-78776bfc44-45fhs 0 (0%) 0 (0%) 0 (0%) 0 (0%) 23d\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 950m (5%) 100m (0%)\n memory 290Mi (0%) 390Mi (0%)\n ephemeral-storage 100Mi (0%) 0 (0%)\n hugepages-1Gi 0 (0%) 0 (0%)\n hugepages-2Mi 0 (0%) 0 (0%)\nEvents: \n" 
Feb 2 22:42:22.237: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-6036 describe namespace kubectl-6036' Feb 2 22:42:22.327: INFO: stderr: "" Feb 2 22:42:22.327: INFO: stdout: "Name: kubectl-6036\nLabels: e2e-framework=kubectl\n e2e-run=d8acb757-e32d-4ee4-8616-e2f5848ca1dd\nAnnotations: \nStatus: Active\n\nNo resource quota.\n\nNo LimitRange resource.\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 2 22:42:22.327: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6036" for this suite. • [SLOW TEST:7.163 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl describe /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1090 should check if kubectl describe prints relevant information for rc and pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance]","total":309,"completed":16,"skipped":382,"failed":0} SSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should delete a collection of pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 2 22:42:22.335: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:187 [It] should delete a collection of pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Create set of pods Feb 2 22:42:22.436: INFO: created test-pod-1 Feb 2 22:42:22.455: INFO: created test-pod-2 Feb 2 22:42:22.470: INFO: created test-pod-3 STEP: waiting for all 3 pods to be located STEP: waiting for all pods to be deleted [AfterEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 2 22:42:22.700: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-2291" for this suite. 
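The three pods are removed with a single collection delete rather than one DELETE per pod, which is the API surface this test exercises. A minimal client-go sketch, assuming the pods carry a common label to select on (the log does not show the exact label the test sets):

package e2esketch

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// deleteTestPods removes test-pod-1..3 in one API call by deleting the whole
// labelled collection instead of issuing three individual deletes. The
// selector value is a placeholder.
func deleteTestPods(ctx context.Context, c kubernetes.Interface, ns string) error {
	return c.CoreV1().Pods(ns).DeleteCollection(ctx, metav1.DeleteOptions{},
		metav1.ListOptions{LabelSelector: "type=Testing"})
}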
•{"msg":"PASSED [k8s.io] Pods should delete a collection of pods [Conformance]","total":309,"completed":17,"skipped":395,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 2 22:42:22.710: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a pod to test emptydir volume type on tmpfs Feb 2 22:42:22.818: INFO: Waiting up to 5m0s for pod "pod-7c89bc4a-5626-4291-b9c7-8f12a771fb27" in namespace "emptydir-9642" to be "Succeeded or Failed" Feb 2 22:42:22.820: INFO: Pod "pod-7c89bc4a-5626-4291-b9c7-8f12a771fb27": Phase="Pending", Reason="", readiness=false. Elapsed: 2.471223ms Feb 2 22:42:24.824: INFO: Pod "pod-7c89bc4a-5626-4291-b9c7-8f12a771fb27": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006346892s Feb 2 22:42:26.829: INFO: Pod "pod-7c89bc4a-5626-4291-b9c7-8f12a771fb27": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011061275s STEP: Saw pod success Feb 2 22:42:26.829: INFO: Pod "pod-7c89bc4a-5626-4291-b9c7-8f12a771fb27" satisfied condition "Succeeded or Failed" Feb 2 22:42:26.832: INFO: Trying to get logs from node leguer-worker pod pod-7c89bc4a-5626-4291-b9c7-8f12a771fb27 container test-container: STEP: delete the pod Feb 2 22:42:26.857: INFO: Waiting for pod pod-7c89bc4a-5626-4291-b9c7-8f12a771fb27 to disappear Feb 2 22:42:26.862: INFO: Pod pod-7c89bc4a-5626-4291-b9c7-8f12a771fb27 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 2 22:42:26.862: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9642" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":309,"completed":18,"skipped":465,"failed":0} SSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 2 22:42:26.870: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41 [It] should provide container's memory request [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a pod to test downward API volume plugin Feb 2 22:42:26.981: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9ccce20a-f748-4991-8bc9-1320bacc2a57" in namespace "downward-api-8276" to be "Succeeded or Failed" Feb 2 22:42:26.996: INFO: Pod "downwardapi-volume-9ccce20a-f748-4991-8bc9-1320bacc2a57": Phase="Pending", Reason="", readiness=false. Elapsed: 15.331096ms Feb 2 22:42:29.053: INFO: Pod "downwardapi-volume-9ccce20a-f748-4991-8bc9-1320bacc2a57": Phase="Pending", Reason="", readiness=false. Elapsed: 2.071884874s Feb 2 22:42:31.057: INFO: Pod "downwardapi-volume-9ccce20a-f748-4991-8bc9-1320bacc2a57": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.075731409s STEP: Saw pod success Feb 2 22:42:31.057: INFO: Pod "downwardapi-volume-9ccce20a-f748-4991-8bc9-1320bacc2a57" satisfied condition "Succeeded or Failed" Feb 2 22:42:31.059: INFO: Trying to get logs from node leguer-worker pod downwardapi-volume-9ccce20a-f748-4991-8bc9-1320bacc2a57 container client-container: STEP: delete the pod Feb 2 22:42:31.250: INFO: Waiting for pod downwardapi-volume-9ccce20a-f748-4991-8bc9-1320bacc2a57 to disappear Feb 2 22:42:31.322: INFO: Pod downwardapi-volume-9ccce20a-f748-4991-8bc9-1320bacc2a57 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 2 22:42:31.322: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8276" for this suite. 
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":309,"completed":19,"skipped":471,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-api-machinery] server version should find the server version [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] server version /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 2 22:42:31.331: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename server-version STEP: Waiting for a default service account to be provisioned in namespace [It] should find the server version [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Request ServerVersion STEP: Confirm major version Feb 2 22:42:31.409: INFO: Major version: 1 STEP: Confirm minor version Feb 2 22:42:31.409: INFO: cleanMinorVersion: 20 Feb 2 22:42:31.409: INFO: Minor version: 20 [AfterEach] [sig-api-machinery] server version /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 2 22:42:31.409: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "server-version-5754" for this suite. •{"msg":"PASSED [sig-api-machinery] server version should find the server version [Conformance]","total":309,"completed":20,"skipped":484,"failed":0} S ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 2 22:42:31.422: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: udp [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Performing setup for networking test in namespace pod-network-test-2887 STEP: creating a selector STEP: Creating the service pods in kubernetes Feb 2 22:42:31.514: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Feb 2 22:42:31.625: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Feb 2 22:42:33.630: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Feb 2 22:42:35.630: INFO: The status of Pod netserver-0 is Running (Ready = false) Feb 2 22:42:37.630: INFO: The status of Pod netserver-0 is Running (Ready = false) Feb 2 22:42:39.629: INFO: The status of Pod netserver-0 is Running (Ready = false) Feb 2 22:42:41.629: INFO: The status of Pod netserver-0 is Running (Ready = false) Feb 2 22:42:43.629: INFO: The status of Pod netserver-0 is Running (Ready = false) Feb 2 22:42:45.630: INFO: The status of Pod netserver-0 is Running (Ready = false) Feb 2 22:42:47.630: INFO: The status of Pod netserver-0 is Running (Ready = 
false) Feb 2 22:42:49.630: INFO: The status of Pod netserver-0 is Running (Ready = false) Feb 2 22:42:51.630: INFO: The status of Pod netserver-0 is Running (Ready = false) Feb 2 22:42:53.630: INFO: The status of Pod netserver-0 is Running (Ready = true) Feb 2 22:42:53.637: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Feb 2 22:42:57.675: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2 Feb 2 22:42:57.675: INFO: Breadth first check of 10.244.2.160 on host 172.18.0.13... Feb 2 22:42:57.678: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.162:9080/dial?request=hostname&protocol=udp&host=10.244.2.160&port=8081&tries=1'] Namespace:pod-network-test-2887 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Feb 2 22:42:57.678: INFO: >>> kubeConfig: /root/.kube/config I0202 22:42:57.722820 7 log.go:181] (0xc000816dc0) (0xc002efc5a0) Create stream I0202 22:42:57.722856 7 log.go:181] (0xc000816dc0) (0xc002efc5a0) Stream added, broadcasting: 1 I0202 22:42:57.726499 7 log.go:181] (0xc000816dc0) Reply frame received for 1 I0202 22:42:57.726604 7 log.go:181] (0xc000816dc0) (0xc00308c460) Create stream I0202 22:42:57.726625 7 log.go:181] (0xc000816dc0) (0xc00308c460) Stream added, broadcasting: 3 I0202 22:42:57.727741 7 log.go:181] (0xc000816dc0) Reply frame received for 3 I0202 22:42:57.727795 7 log.go:181] (0xc000816dc0) (0xc0032381e0) Create stream I0202 22:42:57.727823 7 log.go:181] (0xc000816dc0) (0xc0032381e0) Stream added, broadcasting: 5 I0202 22:42:57.728939 7 log.go:181] (0xc000816dc0) Reply frame received for 5 I0202 22:42:57.800269 7 log.go:181] (0xc000816dc0) Data frame received for 3 I0202 22:42:57.800304 7 log.go:181] (0xc00308c460) (3) Data frame handling I0202 22:42:57.800319 7 log.go:181] (0xc00308c460) (3) Data frame sent I0202 22:42:57.800668 7 log.go:181] (0xc000816dc0) Data frame received for 3 I0202 22:42:57.800704 7 log.go:181] (0xc00308c460) (3) Data frame handling I0202 22:42:57.800740 7 log.go:181] (0xc000816dc0) Data frame received for 5 I0202 22:42:57.800756 7 log.go:181] (0xc0032381e0) (5) Data frame handling I0202 22:42:57.802321 7 log.go:181] (0xc000816dc0) Data frame received for 1 I0202 22:42:57.802365 7 log.go:181] (0xc002efc5a0) (1) Data frame handling I0202 22:42:57.802386 7 log.go:181] (0xc002efc5a0) (1) Data frame sent I0202 22:42:57.802458 7 log.go:181] (0xc000816dc0) (0xc002efc5a0) Stream removed, broadcasting: 1 I0202 22:42:57.802488 7 log.go:181] (0xc000816dc0) Go away received I0202 22:42:57.802905 7 log.go:181] (0xc000816dc0) (0xc002efc5a0) Stream removed, broadcasting: 1 I0202 22:42:57.802934 7 log.go:181] (0xc000816dc0) (0xc00308c460) Stream removed, broadcasting: 3 I0202 22:42:57.802948 7 log.go:181] (0xc000816dc0) (0xc0032381e0) Stream removed, broadcasting: 5 Feb 2 22:42:57.802: INFO: Waiting for responses: map[] Feb 2 22:42:57.803: INFO: reached 10.244.2.160 after 0/1 tries Feb 2 22:42:57.803: INFO: Breadth first check of 10.244.1.161 on host 172.18.0.12... 
Feb 2 22:42:57.806: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.162:9080/dial?request=hostname&protocol=udp&host=10.244.1.161&port=8081&tries=1'] Namespace:pod-network-test-2887 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Feb 2 22:42:57.806: INFO: >>> kubeConfig: /root/.kube/config I0202 22:42:57.834141 7 log.go:181] (0xc00229e0b0) (0xc001a7c960) Create stream I0202 22:42:57.834163 7 log.go:181] (0xc00229e0b0) (0xc001a7c960) Stream added, broadcasting: 1 I0202 22:42:57.836528 7 log.go:181] (0xc00229e0b0) Reply frame received for 1 I0202 22:42:57.836585 7 log.go:181] (0xc00229e0b0) (0xc00308c500) Create stream I0202 22:42:57.836597 7 log.go:181] (0xc00229e0b0) (0xc00308c500) Stream added, broadcasting: 3 I0202 22:42:57.837744 7 log.go:181] (0xc00229e0b0) Reply frame received for 3 I0202 22:42:57.837779 7 log.go:181] (0xc00229e0b0) (0xc002efc6e0) Create stream I0202 22:42:57.837790 7 log.go:181] (0xc00229e0b0) (0xc002efc6e0) Stream added, broadcasting: 5 I0202 22:42:57.838693 7 log.go:181] (0xc00229e0b0) Reply frame received for 5 I0202 22:42:57.907701 7 log.go:181] (0xc00229e0b0) Data frame received for 3 I0202 22:42:57.907734 7 log.go:181] (0xc00308c500) (3) Data frame handling I0202 22:42:57.907768 7 log.go:181] (0xc00308c500) (3) Data frame sent I0202 22:42:57.908564 7 log.go:181] (0xc00229e0b0) Data frame received for 3 I0202 22:42:57.908588 7 log.go:181] (0xc00308c500) (3) Data frame handling I0202 22:42:57.908638 7 log.go:181] (0xc00229e0b0) Data frame received for 5 I0202 22:42:57.908683 7 log.go:181] (0xc002efc6e0) (5) Data frame handling I0202 22:42:57.910609 7 log.go:181] (0xc00229e0b0) Data frame received for 1 I0202 22:42:57.910645 7 log.go:181] (0xc001a7c960) (1) Data frame handling I0202 22:42:57.910692 7 log.go:181] (0xc001a7c960) (1) Data frame sent I0202 22:42:57.910731 7 log.go:181] (0xc00229e0b0) (0xc001a7c960) Stream removed, broadcasting: 1 I0202 22:42:57.910772 7 log.go:181] (0xc00229e0b0) Go away received I0202 22:42:57.910823 7 log.go:181] (0xc00229e0b0) (0xc001a7c960) Stream removed, broadcasting: 1 I0202 22:42:57.910843 7 log.go:181] (0xc00229e0b0) (0xc00308c500) Stream removed, broadcasting: 3 I0202 22:42:57.910855 7 log.go:181] (0xc00229e0b0) (0xc002efc6e0) Stream removed, broadcasting: 5 Feb 2 22:42:57.910: INFO: Waiting for responses: map[] Feb 2 22:42:57.910: INFO: reached 10.244.1.161 after 0/1 tries Feb 2 22:42:57.910: INFO: Going to retry 0 out of 2 pods.... [AfterEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 2 22:42:57.910: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-2887" for this suite. 
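Each "Breadth first check" above asks the webserver in test-container-pod (port 9080) to relay a UDP hostname request to a netserver pod on port 8081 and report what answered. The curl command from the ExecWithOptions lines, rewritten as a small Go helper; it only works from inside the pod network, which is why the test execs curl in a pod rather than calling the endpoint directly:

package e2esketch

import (
	"fmt"
	"io"
	"net/http"
	"net/url"
)

// dialProbe asks the probe pod's webserver to send a UDP "hostname" request to
// targetIP:8081 and returns whatever body the dial endpoint reports back.
// Ports 9080 and 8081 match the command lines logged above.
func dialProbe(probePodIP, targetIP string) (string, error) {
	u := fmt.Sprintf("http://%s:9080/dial?%s", probePodIP, url.Values{
		"request":  {"hostname"},
		"protocol": {"udp"},
		"host":     {targetIP},
		"port":     {"8081"},
		"tries":    {"1"},
	}.Encode())
	resp, err := http.Get(u)
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	return string(body), err
}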
• [SLOW TEST:26.499 seconds] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:27 Granular Checks: Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:30 should function for intra-pod communication: udp [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","total":309,"completed":21,"skipped":485,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 2 22:42:57.921: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-6235.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-6235.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6235.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-6235.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-6235.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6235.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe /etc/hosts STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Feb 2 22:43:06.209: INFO: DNS probes using dns-6235/dns-test-bca255a2-dbc0-413c-8196-6e81c5a6215c succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 2 22:43:06.284: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-6235" for this suite. 
• [SLOW TEST:8.408 seconds] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":309,"completed":22,"skipped":498,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 2 22:43:06.330: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Feb 2 22:43:07.916: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Feb 2 22:43:10.046: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63747902587, loc:(*time.Location)(0x7962e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63747902587, loc:(*time.Location)(0x7962e20)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63747902587, loc:(*time.Location)(0x7962e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63747902587, loc:(*time.Location)(0x7962e20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6bd9446d55\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 2 22:43:12.056: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63747902587, loc:(*time.Location)(0x7962e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63747902587, loc:(*time.Location)(0x7962e20)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63747902587, loc:(*time.Location)(0x7962e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63747902587, loc:(*time.Location)(0x7962e20)}}, 
Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6bd9446d55\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Feb 2 22:43:15.590: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with different stored version [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 Feb 2 22:43:15.646: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-1537-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource while v1 is storage version STEP: Patching Custom Resource Definition to set v2 as storage STEP: Patching the custom resource while v2 is storage version [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 2 22:43:16.866: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-3761" for this suite. STEP: Destroying namespace "webhook-3761-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 • [SLOW TEST:10.753 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with different stored version [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":309,"completed":23,"skipped":528,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 2 22:43:17.083: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] removes definition from spec when one version gets changed to not be served [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: set up a multi version CRD Feb 2 22:43:17.224: INFO: >>> kubeConfig: /root/.kube/config STEP: mark a version not serverd STEP: check the unserved version gets removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 2 22:43:35.454: INFO: Waiting up to 3m0s 
for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-7477" for this suite. • [SLOW TEST:18.376 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 removes definition from spec when one version gets changed to not be served [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":309,"completed":24,"skipped":546,"failed":0} [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 2 22:43:35.460: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a pod to test downward API volume plugin Feb 2 22:43:35.561: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d518d6d1-fe7d-452f-8e74-60874be6d6ab" in namespace "projected-2666" to be "Succeeded or Failed" Feb 2 22:43:35.620: INFO: Pod "downwardapi-volume-d518d6d1-fe7d-452f-8e74-60874be6d6ab": Phase="Pending", Reason="", readiness=false. Elapsed: 58.732721ms Feb 2 22:43:37.623: INFO: Pod "downwardapi-volume-d518d6d1-fe7d-452f-8e74-60874be6d6ab": Phase="Pending", Reason="", readiness=false. Elapsed: 2.062176833s Feb 2 22:43:39.627: INFO: Pod "downwardapi-volume-d518d6d1-fe7d-452f-8e74-60874be6d6ab": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.065759269s STEP: Saw pod success Feb 2 22:43:39.627: INFO: Pod "downwardapi-volume-d518d6d1-fe7d-452f-8e74-60874be6d6ab" satisfied condition "Succeeded or Failed" Feb 2 22:43:39.630: INFO: Trying to get logs from node leguer-worker pod downwardapi-volume-d518d6d1-fe7d-452f-8e74-60874be6d6ab container client-container: STEP: delete the pod Feb 2 22:43:39.763: INFO: Waiting for pod downwardapi-volume-d518d6d1-fe7d-452f-8e74-60874be6d6ab to disappear Feb 2 22:43:39.771: INFO: Pod downwardapi-volume-d518d6d1-fe7d-452f-8e74-60874be6d6ab no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 2 22:43:39.771: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2666" for this suite. 
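The "set mode on item file" case creates a pod whose downward API volume gives one item an explicit file mode and then verifies that mode from inside the container. A minimal sketch of such a pod, using a projected downward API source and an illustrative 0400 mode (none of the names or values below are the test's own):

    # Sketch: a projected downward API item with an explicit per-item mode.
    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: projected-mode-demo
    spec:
      restartPolicy: Never
      containers:
      - name: client-container
        image: busybox
        command: ["sh", "-c", "stat -Lc '%a' /etc/podinfo/podname"]
        volumeMounts:
        - name: podinfo
          mountPath: /etc/podinfo
      volumes:
      - name: podinfo
        projected:
          sources:
          - downwardAPI:
              items:
              - path: podname
                fieldRef:
                  fieldPath: metadata.name
                mode: 0400          # the per-item mode this test exercises
    EOF
    # The container should print 400, i.e. the item's own mode rather than the volume default.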
•{"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":309,"completed":25,"skipped":546,"failed":0} SSS ------------------------------ [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 2 22:43:39.779: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41 [It] should update labels on modification [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating the pod Feb 2 22:43:44.411: INFO: Successfully updated pod "labelsupdate32cd1483-3124-4dd8-b494-ad0f9bbb246f" [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 2 22:43:48.436: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4581" for this suite. • [SLOW TEST:8.668 seconds] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:35 should update labels on modification [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":309,"completed":26,"skipped":549,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 2 22:43:48.447: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Feb 2 22:43:49.336: INFO: new replicaset for deployment "sample-webhook-deployment" is yet to be created Feb 2 22:43:51.346: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63747902629, loc:(*time.Location)(0x7962e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63747902629, loc:(*time.Location)(0x7962e20)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63747902629, loc:(*time.Location)(0x7962e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63747902629, loc:(*time.Location)(0x7962e20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6bd9446d55\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Feb 2 22:43:54.419: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny pod and configmap creation [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod that should be denied by the webhook STEP: create a pod that causes the webhook to hang STEP: create a configmap that should be denied by the webhook STEP: create a configmap that should be admitted by the webhook STEP: update (PUT) the admitted configmap to a non-compliant one should be rejected by the webhook STEP: update (PATCH) the admitted configmap to a non-compliant one should be rejected by the webhook STEP: create a namespace that bypass the webhook STEP: create a configmap that violates the webhook policy but is in a whitelisted namespace [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 2 22:44:04.625: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-8028" for this suite. STEP: Destroying namespace "webhook-8028-markers" for this suite. 
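The deny test registers a validating webhook backed by the sample-webhook-deployment service brought up above, then checks that non-compliant pods and configmaps are rejected when created or updated (both PUT and PATCH) while a whitelisted namespace bypasses the policy. A rough sketch of the kind of registration object involved (the webhook name, path and rules are illustrative; the real test also sets a caBundle for its generated serving certificate and a namespaceSelector for the bypass namespace):

    # Sketch of a ValidatingWebhookConfiguration comparable to the one registered by this test.
    kubectl apply -f - <<'EOF'
    apiVersion: admissionregistration.k8s.io/v1
    kind: ValidatingWebhookConfiguration
    metadata:
      name: deny-unwanted-objects            # illustrative name
    webhooks:
    - name: deny.example.com                 # illustrative webhook name
      admissionReviewVersions: ["v1", "v1beta1"]
      sideEffects: None
      failurePolicy: Fail
      clientConfig:
        service:
          namespace: webhook-8028            # namespace used by this run
          name: e2e-test-webhook             # service the log waits on above
          path: /always-deny                 # illustrative path
        # caBundle: <CA that signed the webhook's serving cert> -- needed in practice
      rules:
      - apiGroups: [""]
        apiVersions: ["v1"]
        operations: ["CREATE", "UPDATE"]
        resources: ["pods", "configmaps"]
    EOF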
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 • [SLOW TEST:16.330 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny pod and configmap creation [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":309,"completed":27,"skipped":569,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [k8s.io] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 2 22:44:04.778: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should use the image defaults if command and args are blank [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [AfterEach] [k8s.io] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 2 22:44:08.900: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-5189" for this suite. 
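The "image defaults" case simply runs a pod whose container sets neither command nor args, so the image's own ENTRYPOINT and CMD run unchanged. A minimal illustration (image and names are illustrative):

    # With command and args omitted, the container runs the image's ENTRYPOINT/CMD as-is.
    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: image-defaults-demo
    spec:
      restartPolicy: Never
      containers:
      - name: agnhost
        image: k8s.gcr.io/e2e-test-images/agnhost:2.21   # illustrative; any image with a default entrypoint works
        # no command:, no args:  ->  image defaults apply
    EOF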
•{"msg":"PASSED [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":309,"completed":28,"skipped":603,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl diff should check if kubectl diff finds a difference for Deployments [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 2 22:44:08.909: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247 [It] should check if kubectl diff finds a difference for Deployments [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: create deployment with httpd image Feb 2 22:44:08.956: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-8158 create -f -' Feb 2 22:44:12.451: INFO: stderr: "" Feb 2 22:44:12.451: INFO: stdout: "deployment.apps/httpd-deployment created\n" STEP: verify diff finds difference between live and declared image Feb 2 22:44:12.451: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-8158 diff -f -' Feb 2 22:44:12.997: INFO: rc: 1 Feb 2 22:44:12.997: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-8158 delete -f -' Feb 2 22:44:13.143: INFO: stderr: "" Feb 2 22:44:13.143: INFO: stdout: "deployment.apps \"httpd-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 2 22:44:13.143: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8158" for this suite. 
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl diff should check if kubectl diff finds a difference for Deployments [Conformance]","total":309,"completed":29,"skipped":620,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 2 22:44:13.198: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41 [It] should provide container's cpu limit [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a pod to test downward API volume plugin Feb 2 22:44:13.308: INFO: Waiting up to 5m0s for pod "downwardapi-volume-dfddf947-e857-4650-952a-082c9184b551" in namespace "downward-api-8229" to be "Succeeded or Failed" Feb 2 22:44:13.322: INFO: Pod "downwardapi-volume-dfddf947-e857-4650-952a-082c9184b551": Phase="Pending", Reason="", readiness=false. Elapsed: 13.915295ms Feb 2 22:44:15.324: INFO: Pod "downwardapi-volume-dfddf947-e857-4650-952a-082c9184b551": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016407791s Feb 2 22:44:17.491: INFO: Pod "downwardapi-volume-dfddf947-e857-4650-952a-082c9184b551": Phase="Running", Reason="", readiness=true. Elapsed: 4.183411678s Feb 2 22:44:19.496: INFO: Pod "downwardapi-volume-dfddf947-e857-4650-952a-082c9184b551": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.188689403s STEP: Saw pod success Feb 2 22:44:19.496: INFO: Pod "downwardapi-volume-dfddf947-e857-4650-952a-082c9184b551" satisfied condition "Succeeded or Failed" Feb 2 22:44:19.500: INFO: Trying to get logs from node leguer-worker pod downwardapi-volume-dfddf947-e857-4650-952a-082c9184b551 container client-container: STEP: delete the pod Feb 2 22:44:19.562: INFO: Waiting for pod downwardapi-volume-dfddf947-e857-4650-952a-082c9184b551 to disappear Feb 2 22:44:19.580: INFO: Pod downwardapi-volume-dfddf947-e857-4650-952a-082c9184b551 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 2 22:44:19.580: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8229" for this suite. 
• [SLOW TEST:6.391 seconds] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:36 should provide container's cpu limit [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":309,"completed":30,"skipped":631,"failed":0} SSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 2 22:44:19.590: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-3932 [It] should perform canary updates and phased rolling updates of template modifications [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a new StatefulSet Feb 2 22:44:19.694: INFO: Found 0 stateful pods, waiting for 3 Feb 2 22:44:29.712: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Feb 2 22:44:29.712: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Feb 2 22:44:29.712: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false Feb 2 22:44:39.699: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Feb 2 22:44:39.699: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Feb 2 22:44:39.699: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Updating stateful set template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine Feb 2 22:44:39.732: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Not applying an update when the partition is greater than the number of replicas STEP: Performing a canary update Feb 2 22:44:49.825: INFO: Updating stateful set ss2 Feb 2 22:44:49.863: INFO: Waiting for Pod statefulset-3932/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Feb 2 22:44:59.914: INFO: Waiting for Pod statefulset-3932/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 STEP: Restoring Pods to the correct revision when they are deleted Feb 2 22:45:10.485: INFO: Found 2 stateful pods, waiting for 3 Feb 2 22:45:20.491: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Feb 2 22:45:20.491: INFO: Waiting 
for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Feb 2 22:45:20.491: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Performing a phased rolling update Feb 2 22:45:20.517: INFO: Updating stateful set ss2 Feb 2 22:45:20.541: INFO: Waiting for Pod statefulset-3932/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Feb 2 22:45:30.550: INFO: Waiting for Pod statefulset-3932/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Feb 2 22:45:40.548: INFO: Waiting for Pod statefulset-3932/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Feb 2 22:45:50.568: INFO: Updating stateful set ss2 Feb 2 22:45:50.624: INFO: Waiting for StatefulSet statefulset-3932/ss2 to complete update Feb 2 22:45:50.624: INFO: Waiting for Pod statefulset-3932/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Feb 2 22:46:00.631: INFO: Waiting for StatefulSet statefulset-3932/ss2 to complete update [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 Feb 2 22:46:10.631: INFO: Deleting all statefulset in ns statefulset-3932 Feb 2 22:46:10.634: INFO: Scaling statefulset ss2 to 0 Feb 2 22:47:00.674: INFO: Waiting for statefulset status.replicas updated to 0 Feb 2 22:47:00.678: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 2 22:47:00.712: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-3932" for this suite. • [SLOW TEST:161.131 seconds] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 should perform canary updates and phased rolling updates of template modifications [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":309,"completed":31,"skipped":636,"failed":0} SSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [k8s.io] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 2 22:47:00.721: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a pod to test override arguments Feb 2 22:47:00.796: INFO: Waiting up to 5m0s 
for pod "client-containers-bfff4883-b243-40fb-94da-fa6b390594b1" in namespace "containers-6405" to be "Succeeded or Failed" Feb 2 22:47:00.799: INFO: Pod "client-containers-bfff4883-b243-40fb-94da-fa6b390594b1": Phase="Pending", Reason="", readiness=false. Elapsed: 3.311285ms Feb 2 22:47:02.803: INFO: Pod "client-containers-bfff4883-b243-40fb-94da-fa6b390594b1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007394028s Feb 2 22:47:04.808: INFO: Pod "client-containers-bfff4883-b243-40fb-94da-fa6b390594b1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012185636s STEP: Saw pod success Feb 2 22:47:04.808: INFO: Pod "client-containers-bfff4883-b243-40fb-94da-fa6b390594b1" satisfied condition "Succeeded or Failed" Feb 2 22:47:04.816: INFO: Trying to get logs from node leguer-worker pod client-containers-bfff4883-b243-40fb-94da-fa6b390594b1 container agnhost-container: STEP: delete the pod Feb 2 22:47:04.849: INFO: Waiting for pod client-containers-bfff4883-b243-40fb-94da-fa6b390594b1 to disappear Feb 2 22:47:04.853: INFO: Pod client-containers-bfff4883-b243-40fb-94da-fa6b390594b1 no longer exists [AfterEach] [k8s.io] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 2 22:47:04.854: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-6405" for this suite. •{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":309,"completed":32,"skipped":643,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 2 22:47:04.879: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating secret with name s-test-opt-del-c3717161-31b1-46ba-bb76-3c257e7bee31 STEP: Creating secret with name s-test-opt-upd-c6410630-2aba-4b01-8a8d-8044c072a177 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-c3717161-31b1-46ba-bb76-3c257e7bee31 STEP: Updating secret s-test-opt-upd-c6410630-2aba-4b01-8a8d-8044c072a177 STEP: Creating secret with name s-test-opt-create-346ce305-3d29-4cb6-acb8-ea9d123b4fd7 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 2 22:47:15.429: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-2142" for this suite. 
• [SLOW TEST:10.594 seconds] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36 optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":309,"completed":33,"skipped":670,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 2 22:47:15.474: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Feb 2 22:47:16.708: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Feb 2 22:47:19.126: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63747902836, loc:(*time.Location)(0x7962e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63747902836, loc:(*time.Location)(0x7962e20)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63747902837, loc:(*time.Location)(0x7962e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63747902836, loc:(*time.Location)(0x7962e20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6bd9446d55\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 2 22:47:21.166: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63747902836, loc:(*time.Location)(0x7962e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63747902836, loc:(*time.Location)(0x7962e20)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63747902837, loc:(*time.Location)(0x7962e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63747902836, loc:(*time.Location)(0x7962e20)}}, 
Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6bd9446d55\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Feb 2 22:47:24.236: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate configmap [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Registering the mutating configmap webhook via the AdmissionRegistration API STEP: create a configmap that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 2 22:47:24.287: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-1971" for this suite. STEP: Destroying namespace "webhook-1971-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 • [SLOW TEST:8.921 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate configmap [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":309,"completed":34,"skipped":698,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Service endpoints latency should not be very high [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-network] Service endpoints latency /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 2 22:47:24.397: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svc-latency STEP: Waiting for a default service account to be provisioned in namespace [It] should not be very high [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 Feb 2 22:47:24.568: INFO: >>> kubeConfig: /root/.kube/config STEP: creating replication controller svc-latency-rc in namespace svc-latency-4988 I0202 22:47:24.601449 7 runners.go:190] Created replication controller with name: svc-latency-rc, namespace: svc-latency-4988, replica count: 1 I0202 22:47:25.651988 7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0202 22:47:26.652256 7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0202 22:47:27.652485 7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0202 22:47:28.652738 7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Feb 2 22:47:28.815: INFO: Created: latency-svc-9lftb Feb 2 22:47:28.849: INFO: Got endpoints: latency-svc-9lftb [96.304228ms] Feb 2 22:47:28.890: INFO: Created: latency-svc-tjc8n Feb 2 22:47:28.906: INFO: Got endpoints: latency-svc-tjc8n [56.731631ms] Feb 2 22:47:28.964: INFO: Created: latency-svc-z6jw5 Feb 2 22:47:29.005: INFO: Got endpoints: latency-svc-z6jw5 [155.49329ms] Feb 2 22:47:29.035: INFO: Created: latency-svc-rhsjz Feb 2 22:47:29.084: INFO: Got endpoints: latency-svc-rhsjz [234.825956ms] Feb 2 22:47:29.125: INFO: Created: latency-svc-jvpwd Feb 2 22:47:29.137: INFO: Got endpoints: latency-svc-jvpwd [287.680109ms] Feb 2 22:47:29.222: INFO: Created: latency-svc-52wvh Feb 2 22:47:29.258: INFO: Created: latency-svc-kztkr Feb 2 22:47:29.258: INFO: Got endpoints: latency-svc-52wvh [408.251237ms] Feb 2 22:47:29.292: INFO: Got endpoints: latency-svc-kztkr [442.057167ms] Feb 2 22:47:29.354: INFO: Created: latency-svc-v2rrm Feb 2 22:47:29.370: INFO: Got endpoints: latency-svc-v2rrm [520.436234ms] Feb 2 22:47:29.402: INFO: Created: latency-svc-l2cgg Feb 2 22:47:29.417: INFO: Got endpoints: latency-svc-l2cgg [567.798261ms] Feb 2 22:47:29.450: INFO: Created: latency-svc-drb8c Feb 2 22:47:29.473: INFO: Got endpoints: latency-svc-drb8c [623.826239ms] Feb 2 22:47:29.491: INFO: Created: latency-svc-ttzbb Feb 2 22:47:29.538: INFO: Got endpoints: latency-svc-ttzbb [688.638934ms] Feb 2 22:47:29.612: INFO: Created: latency-svc-5t87g Feb 2 22:47:29.665: INFO: Got endpoints: latency-svc-5t87g [816.001883ms] Feb 2 22:47:29.743: INFO: Created: latency-svc-plnrq Feb 2 22:47:29.777: INFO: Got endpoints: latency-svc-plnrq [927.106622ms] Feb 2 22:47:29.876: INFO: Created: latency-svc-hfg76 Feb 2 22:47:29.885: INFO: Got endpoints: latency-svc-hfg76 [1.03583666s] Feb 2 22:47:29.946: INFO: Created: latency-svc-s898m Feb 2 22:47:29.963: INFO: Got endpoints: latency-svc-s898m [1.113329968s] Feb 2 22:47:30.001: INFO: Created: latency-svc-tph72 Feb 2 22:47:30.024: INFO: Got endpoints: latency-svc-tph72 [1.174335846s] Feb 2 22:47:30.055: INFO: Created: latency-svc-b4jdx Feb 2 22:47:30.071: INFO: Got endpoints: latency-svc-b4jdx [1.164955222s] Feb 2 22:47:30.180: INFO: Created: latency-svc-llqxm Feb 2 22:47:30.204: INFO: Got endpoints: latency-svc-llqxm [1.198528899s] Feb 2 22:47:30.241: INFO: Created: latency-svc-k6854 Feb 2 22:47:30.300: INFO: Got endpoints: latency-svc-k6854 [1.215592562s] Feb 2 22:47:30.332: INFO: Created: latency-svc-sd8vk Feb 2 22:47:30.340: INFO: Got endpoints: latency-svc-sd8vk [1.202721924s] Feb 2 22:47:30.391: INFO: Created: latency-svc-4dg9b Feb 2 22:47:30.419: INFO: Got endpoints: latency-svc-4dg9b [1.160934381s] Feb 2 22:47:30.438: INFO: Created: latency-svc-gfrbt Feb 2 22:47:30.469: INFO: Got endpoints: latency-svc-gfrbt [1.177027715s] Feb 2 22:47:30.494: INFO: Created: latency-svc-hgz4z Feb 2 22:47:30.514: INFO: Got endpoints: latency-svc-hgz4z [1.14421809s] Feb 2 22:47:30.557: INFO: Created: latency-svc-j2x4x Feb 2 22:47:30.582: INFO: Got endpoints: latency-svc-j2x4x [1.16484077s] Feb 2 22:47:30.583: INFO: Created: latency-svc-bgnjq Feb 2 22:47:30.600: INFO: Got endpoints: latency-svc-bgnjq [1.126451202s] Feb 2 22:47:30.624: INFO: Created: latency-svc-snt47 Feb 2 22:47:30.649: INFO: Got endpoints: latency-svc-snt47 [1.110588348s] Feb 2 22:47:30.715: INFO: Created: latency-svc-k6bcb Feb 2 22:47:30.724: INFO: Got endpoints: latency-svc-k6bcb [1.05887823s] Feb 2 22:47:30.768: INFO: Created: latency-svc-nprpz Feb 2 22:47:30.778: INFO: Got endpoints: latency-svc-nprpz [1.000495351s] Feb 2 
22:47:30.815: INFO: Created: latency-svc-27zwk Feb 2 22:47:30.820: INFO: Got endpoints: latency-svc-27zwk [934.822323ms] Feb 2 22:47:30.854: INFO: Created: latency-svc-cq622 Feb 2 22:47:30.887: INFO: Got endpoints: latency-svc-cq622 [923.540985ms] Feb 2 22:47:30.941: INFO: Created: latency-svc-88z2x Feb 2 22:47:30.966: INFO: Got endpoints: latency-svc-88z2x [941.496376ms] Feb 2 22:47:30.966: INFO: Created: latency-svc-zdkf6 Feb 2 22:47:30.996: INFO: Got endpoints: latency-svc-zdkf6 [924.655002ms] Feb 2 22:47:31.085: INFO: Created: latency-svc-bfhzb Feb 2 22:47:31.106: INFO: Got endpoints: latency-svc-bfhzb [902.078832ms] Feb 2 22:47:31.106: INFO: Created: latency-svc-5bvmw Feb 2 22:47:31.129: INFO: Got endpoints: latency-svc-5bvmw [829.159678ms] Feb 2 22:47:31.164: INFO: Created: latency-svc-nmdzf Feb 2 22:47:31.204: INFO: Got endpoints: latency-svc-nmdzf [863.796131ms] Feb 2 22:47:31.262: INFO: Created: latency-svc-d8xfx Feb 2 22:47:31.288: INFO: Got endpoints: latency-svc-d8xfx [868.603751ms] Feb 2 22:47:31.362: INFO: Created: latency-svc-x2z6w Feb 2 22:47:31.386: INFO: Got endpoints: latency-svc-x2z6w [916.718769ms] Feb 2 22:47:31.387: INFO: Created: latency-svc-qqpm2 Feb 2 22:47:31.410: INFO: Got endpoints: latency-svc-qqpm2 [895.54258ms] Feb 2 22:47:31.434: INFO: Created: latency-svc-rcr2v Feb 2 22:47:31.443: INFO: Got endpoints: latency-svc-rcr2v [860.025414ms] Feb 2 22:47:31.514: INFO: Created: latency-svc-5qkgn Feb 2 22:47:31.527: INFO: Got endpoints: latency-svc-5qkgn [926.643353ms] Feb 2 22:47:31.567: INFO: Created: latency-svc-wstkq Feb 2 22:47:31.641: INFO: Got endpoints: latency-svc-wstkq [992.195985ms] Feb 2 22:47:31.674: INFO: Created: latency-svc-jnzdd Feb 2 22:47:31.724: INFO: Got endpoints: latency-svc-jnzdd [999.236472ms] Feb 2 22:47:31.789: INFO: Created: latency-svc-kxxgs Feb 2 22:47:31.801: INFO: Got endpoints: latency-svc-kxxgs [1.023585629s] Feb 2 22:47:31.854: INFO: Created: latency-svc-j6k46 Feb 2 22:47:31.867: INFO: Got endpoints: latency-svc-j6k46 [1.046971675s] Feb 2 22:47:31.921: INFO: Created: latency-svc-65z74 Feb 2 22:47:31.933: INFO: Got endpoints: latency-svc-65z74 [1.046547061s] Feb 2 22:47:31.958: INFO: Created: latency-svc-n74pr Feb 2 22:47:31.970: INFO: Got endpoints: latency-svc-n74pr [1.003754636s] Feb 2 22:47:31.992: INFO: Created: latency-svc-d6mmj Feb 2 22:47:32.005: INFO: Got endpoints: latency-svc-d6mmj [1.009425314s] Feb 2 22:47:32.083: INFO: Created: latency-svc-695tv Feb 2 22:47:32.109: INFO: Got endpoints: latency-svc-695tv [1.003336707s] Feb 2 22:47:32.167: INFO: Created: latency-svc-f6fvg Feb 2 22:47:32.180: INFO: Got endpoints: latency-svc-f6fvg [1.050823513s] Feb 2 22:47:32.258: INFO: Created: latency-svc-vrt9q Feb 2 22:47:32.288: INFO: Got endpoints: latency-svc-vrt9q [1.084452825s] Feb 2 22:47:32.305: INFO: Created: latency-svc-cvlt6 Feb 2 22:47:32.328: INFO: Got endpoints: latency-svc-cvlt6 [1.04046224s] Feb 2 22:47:32.365: INFO: Created: latency-svc-gsfjj Feb 2 22:47:32.378: INFO: Got endpoints: latency-svc-gsfjj [991.45939ms] Feb 2 22:47:32.444: INFO: Created: latency-svc-5qxj5 Feb 2 22:47:32.455: INFO: Got endpoints: latency-svc-5qxj5 [1.045321034s] Feb 2 22:47:32.508: INFO: Created: latency-svc-rv9g2 Feb 2 22:47:32.551: INFO: Got endpoints: latency-svc-rv9g2 [1.108801559s] Feb 2 22:47:32.569: INFO: Created: latency-svc-xkh26 Feb 2 22:47:32.594: INFO: Got endpoints: latency-svc-xkh26 [1.066830489s] Feb 2 22:47:32.624: INFO: Created: latency-svc-2ptq6 Feb 2 22:47:32.634: INFO: Got endpoints: latency-svc-2ptq6 [992.65475ms] Feb 2 
22:47:32.647: INFO: Created: latency-svc-gmwp2 Feb 2 22:47:32.671: INFO: Got endpoints: latency-svc-gmwp2 [947.62602ms] Feb 2 22:47:32.682: INFO: Created: latency-svc-vxqjx Feb 2 22:47:32.694: INFO: Got endpoints: latency-svc-vxqjx [892.898614ms] Feb 2 22:47:32.719: INFO: Created: latency-svc-ctxzq Feb 2 22:47:32.743: INFO: Got endpoints: latency-svc-ctxzq [875.457162ms] Feb 2 22:47:32.810: INFO: Created: latency-svc-kd6lm Feb 2 22:47:32.827: INFO: Got endpoints: latency-svc-kd6lm [893.919211ms] Feb 2 22:47:32.881: INFO: Created: latency-svc-pp6cw Feb 2 22:47:32.916: INFO: Got endpoints: latency-svc-pp6cw [946.228045ms] Feb 2 22:47:32.952: INFO: Created: latency-svc-cvm4p Feb 2 22:47:32.964: INFO: Got endpoints: latency-svc-cvm4p [958.915735ms] Feb 2 22:47:32.988: INFO: Created: latency-svc-6p7fs Feb 2 22:47:33.054: INFO: Got endpoints: latency-svc-6p7fs [944.757137ms] Feb 2 22:47:33.074: INFO: Created: latency-svc-64dwb Feb 2 22:47:33.102: INFO: Got endpoints: latency-svc-64dwb [922.437852ms] Feb 2 22:47:33.126: INFO: Created: latency-svc-scwbj Feb 2 22:47:33.186: INFO: Got endpoints: latency-svc-scwbj [897.916682ms] Feb 2 22:47:33.206: INFO: Created: latency-svc-2h4rx Feb 2 22:47:33.221: INFO: Got endpoints: latency-svc-2h4rx [893.395545ms] Feb 2 22:47:33.266: INFO: Created: latency-svc-z2fql Feb 2 22:47:33.342: INFO: Got endpoints: latency-svc-z2fql [964.463694ms] Feb 2 22:47:33.360: INFO: Created: latency-svc-4dbhr Feb 2 22:47:33.377: INFO: Got endpoints: latency-svc-4dbhr [922.118759ms] Feb 2 22:47:33.410: INFO: Created: latency-svc-m49jj Feb 2 22:47:33.426: INFO: Got endpoints: latency-svc-m49jj [874.295406ms] Feb 2 22:47:33.439: INFO: Created: latency-svc-vnwxh Feb 2 22:47:33.462: INFO: Got endpoints: latency-svc-vnwxh [867.889163ms] Feb 2 22:47:33.512: INFO: Created: latency-svc-qbk66 Feb 2 22:47:33.528: INFO: Got endpoints: latency-svc-qbk66 [893.565788ms] Feb 2 22:47:33.558: INFO: Created: latency-svc-qr6dv Feb 2 22:47:33.606: INFO: Got endpoints: latency-svc-qr6dv [934.425289ms] Feb 2 22:47:33.674: INFO: Created: latency-svc-vbv7t Feb 2 22:47:33.689: INFO: Got endpoints: latency-svc-vbv7t [994.907097ms] Feb 2 22:47:33.727: INFO: Created: latency-svc-r6c6w Feb 2 22:47:33.743: INFO: Got endpoints: latency-svc-r6c6w [1.000355734s] Feb 2 22:47:33.859: INFO: Created: latency-svc-dgrs6 Feb 2 22:47:33.875: INFO: Got endpoints: latency-svc-dgrs6 [1.047849826s] Feb 2 22:47:33.936: INFO: Created: latency-svc-hmrsd Feb 2 22:47:33.979: INFO: Got endpoints: latency-svc-hmrsd [1.06265133s] Feb 2 22:47:34.009: INFO: Created: latency-svc-l78b8 Feb 2 22:47:34.024: INFO: Got endpoints: latency-svc-l78b8 [1.059407697s] Feb 2 22:47:34.115: INFO: Created: latency-svc-vv8j5 Feb 2 22:47:34.120: INFO: Got endpoints: latency-svc-vv8j5 [1.066477457s] Feb 2 22:47:34.165: INFO: Created: latency-svc-p7x25 Feb 2 22:47:34.180: INFO: Got endpoints: latency-svc-p7x25 [1.077434423s] Feb 2 22:47:34.273: INFO: Created: latency-svc-mb5l7 Feb 2 22:47:34.288: INFO: Got endpoints: latency-svc-mb5l7 [1.101577155s] Feb 2 22:47:34.309: INFO: Created: latency-svc-ksd5q Feb 2 22:47:34.327: INFO: Got endpoints: latency-svc-ksd5q [1.105563325s] Feb 2 22:47:34.390: INFO: Created: latency-svc-79gjr Feb 2 22:47:34.411: INFO: Created: latency-svc-pdslc Feb 2 22:47:34.411: INFO: Got endpoints: latency-svc-79gjr [1.068687119s] Feb 2 22:47:34.428: INFO: Got endpoints: latency-svc-pdslc [1.050591012s] Feb 2 22:47:34.447: INFO: Created: latency-svc-7rwns Feb 2 22:47:34.463: INFO: Got endpoints: latency-svc-7rwns [1.036681678s] Feb 2 
22:47:34.483: INFO: Created: latency-svc-rqf6n Feb 2 22:47:34.534: INFO: Got endpoints: latency-svc-rqf6n [1.072026615s] Feb 2 22:47:34.555: INFO: Created: latency-svc-6gtkt Feb 2 22:47:34.571: INFO: Got endpoints: latency-svc-6gtkt [1.043126119s] Feb 2 22:47:34.584: INFO: Created: latency-svc-g7xz2 Feb 2 22:47:34.612: INFO: Got endpoints: latency-svc-g7xz2 [1.006490904s] Feb 2 22:47:34.672: INFO: Created: latency-svc-gsszk Feb 2 22:47:34.694: INFO: Got endpoints: latency-svc-gsszk [1.004082844s] Feb 2 22:47:34.694: INFO: Created: latency-svc-q2w7r Feb 2 22:47:34.718: INFO: Got endpoints: latency-svc-q2w7r [974.508226ms] Feb 2 22:47:34.741: INFO: Created: latency-svc-pl7h7 Feb 2 22:47:34.852: INFO: Got endpoints: latency-svc-pl7h7 [976.82087ms] Feb 2 22:47:34.856: INFO: Created: latency-svc-d6sqm Feb 2 22:47:34.863: INFO: Got endpoints: latency-svc-d6sqm [884.280575ms] Feb 2 22:47:34.897: INFO: Created: latency-svc-gfvjw Feb 2 22:47:34.911: INFO: Got endpoints: latency-svc-gfvjw [886.894894ms] Feb 2 22:47:34.994: INFO: Created: latency-svc-7lfkb Feb 2 22:47:35.028: INFO: Created: latency-svc-6dl5z Feb 2 22:47:35.029: INFO: Got endpoints: latency-svc-7lfkb [908.067873ms] Feb 2 22:47:35.052: INFO: Got endpoints: latency-svc-6dl5z [872.542968ms] Feb 2 22:47:35.070: INFO: Created: latency-svc-6cq2g Feb 2 22:47:35.088: INFO: Got endpoints: latency-svc-6cq2g [799.849881ms] Feb 2 22:47:35.132: INFO: Created: latency-svc-zc2z4 Feb 2 22:47:35.140: INFO: Got endpoints: latency-svc-zc2z4 [812.674234ms] Feb 2 22:47:35.197: INFO: Created: latency-svc-bwg7p Feb 2 22:47:35.218: INFO: Got endpoints: latency-svc-bwg7p [807.098886ms] Feb 2 22:47:35.262: INFO: Created: latency-svc-4whzm Feb 2 22:47:35.293: INFO: Created: latency-svc-gkqqz Feb 2 22:47:35.294: INFO: Got endpoints: latency-svc-4whzm [865.355037ms] Feb 2 22:47:35.329: INFO: Got endpoints: latency-svc-gkqqz [866.523218ms] Feb 2 22:47:35.378: INFO: Created: latency-svc-zqg8x Feb 2 22:47:35.400: INFO: Got endpoints: latency-svc-zqg8x [866.365527ms] Feb 2 22:47:35.400: INFO: Created: latency-svc-g2fpf Feb 2 22:47:35.430: INFO: Got endpoints: latency-svc-g2fpf [858.580252ms] Feb 2 22:47:35.460: INFO: Created: latency-svc-jghnv Feb 2 22:47:35.474: INFO: Got endpoints: latency-svc-jghnv [861.694442ms] Feb 2 22:47:35.515: INFO: Created: latency-svc-lbnhw Feb 2 22:47:35.534: INFO: Got endpoints: latency-svc-lbnhw [839.688249ms] Feb 2 22:47:35.534: INFO: Created: latency-svc-5pm9f Feb 2 22:47:35.569: INFO: Got endpoints: latency-svc-5pm9f [851.429863ms] Feb 2 22:47:35.598: INFO: Created: latency-svc-v5b8t Feb 2 22:47:35.613: INFO: Got endpoints: latency-svc-v5b8t [761.294128ms] Feb 2 22:47:35.665: INFO: Created: latency-svc-nmtwh Feb 2 22:47:35.673: INFO: Got endpoints: latency-svc-nmtwh [810.122065ms] Feb 2 22:47:35.695: INFO: Created: latency-svc-x8xgz Feb 2 22:47:35.709: INFO: Got endpoints: latency-svc-x8xgz [797.90054ms] Feb 2 22:47:35.743: INFO: Created: latency-svc-7b8jq Feb 2 22:47:35.757: INFO: Got endpoints: latency-svc-7b8jq [728.477958ms] Feb 2 22:47:35.822: INFO: Created: latency-svc-kjpn4 Feb 2 22:47:35.829: INFO: Got endpoints: latency-svc-kjpn4 [776.962101ms] Feb 2 22:47:35.899: INFO: Created: latency-svc-ddhls Feb 2 22:47:35.913: INFO: Got endpoints: latency-svc-ddhls [824.587533ms] Feb 2 22:47:35.953: INFO: Created: latency-svc-znllc Feb 2 22:47:35.965: INFO: Got endpoints: latency-svc-znllc [825.395569ms] Feb 2 22:47:36.000: INFO: Created: latency-svc-kk2q9 Feb 2 22:47:36.014: INFO: Got endpoints: latency-svc-kk2q9 [795.692249ms] Feb 2 
22:47:36.043: INFO: Created: latency-svc-vjghg Feb 2 22:47:36.240: INFO: Got endpoints: latency-svc-vjghg [946.678225ms] Feb 2 22:47:36.242: INFO: Created: latency-svc-gxsqp Feb 2 22:47:36.253: INFO: Got endpoints: latency-svc-gxsqp [923.365269ms] Feb 2 22:47:36.295: INFO: Created: latency-svc-md5p7 Feb 2 22:47:36.307: INFO: Got endpoints: latency-svc-md5p7 [907.091948ms] Feb 2 22:47:36.330: INFO: Created: latency-svc-trf82 Feb 2 22:47:36.390: INFO: Got endpoints: latency-svc-trf82 [960.046487ms] Feb 2 22:47:36.416: INFO: Created: latency-svc-54b2w Feb 2 22:47:36.427: INFO: Got endpoints: latency-svc-54b2w [953.253878ms] Feb 2 22:47:36.445: INFO: Created: latency-svc-v8b2t Feb 2 22:47:36.458: INFO: Got endpoints: latency-svc-v8b2t [924.054549ms] Feb 2 22:47:36.480: INFO: Created: latency-svc-l28tx Feb 2 22:47:36.546: INFO: Got endpoints: latency-svc-l28tx [976.272834ms] Feb 2 22:47:36.547: INFO: Created: latency-svc-qfnf5 Feb 2 22:47:36.571: INFO: Got endpoints: latency-svc-qfnf5 [957.564408ms] Feb 2 22:47:36.595: INFO: Created: latency-svc-hw4qg Feb 2 22:47:36.631: INFO: Got endpoints: latency-svc-hw4qg [958.342599ms] Feb 2 22:47:36.684: INFO: Created: latency-svc-5cgg8 Feb 2 22:47:36.702: INFO: Got endpoints: latency-svc-5cgg8 [993.055739ms] Feb 2 22:47:36.703: INFO: Created: latency-svc-gbw8d Feb 2 22:47:36.738: INFO: Got endpoints: latency-svc-gbw8d [980.994105ms] Feb 2 22:47:36.763: INFO: Created: latency-svc-f758x Feb 2 22:47:36.780: INFO: Got endpoints: latency-svc-f758x [950.780336ms] Feb 2 22:47:36.825: INFO: Created: latency-svc-fppj6 Feb 2 22:47:36.859: INFO: Got endpoints: latency-svc-fppj6 [946.311121ms] Feb 2 22:47:36.888: INFO: Created: latency-svc-xbtmz Feb 2 22:47:36.910: INFO: Got endpoints: latency-svc-xbtmz [944.933199ms] Feb 2 22:47:36.958: INFO: Created: latency-svc-b2xmf Feb 2 22:47:37.454: INFO: Got endpoints: latency-svc-b2xmf [1.439978639s] Feb 2 22:47:37.560: INFO: Created: latency-svc-c8wsj Feb 2 22:47:37.591: INFO: Got endpoints: latency-svc-c8wsj [1.350273616s] Feb 2 22:47:37.643: INFO: Created: latency-svc-kmfpz Feb 2 22:47:37.708: INFO: Got endpoints: latency-svc-kmfpz [1.45555059s] Feb 2 22:47:37.715: INFO: Created: latency-svc-pnx7l Feb 2 22:47:37.740: INFO: Got endpoints: latency-svc-pnx7l [1.432783824s] Feb 2 22:47:37.771: INFO: Created: latency-svc-jhnhq Feb 2 22:47:37.787: INFO: Got endpoints: latency-svc-jhnhq [1.397307309s] Feb 2 22:47:37.897: INFO: Created: latency-svc-k7sbx Feb 2 22:47:37.907: INFO: Got endpoints: latency-svc-k7sbx [1.479475508s] Feb 2 22:47:37.963: INFO: Created: latency-svc-mdt6d Feb 2 22:47:37.972: INFO: Got endpoints: latency-svc-mdt6d [1.514167006s] Feb 2 22:47:38.037: INFO: Created: latency-svc-97pb6 Feb 2 22:47:38.064: INFO: Got endpoints: latency-svc-97pb6 [1.518326912s] Feb 2 22:47:38.065: INFO: Created: latency-svc-6rhh9 Feb 2 22:47:38.112: INFO: Got endpoints: latency-svc-6rhh9 [1.540618523s] Feb 2 22:47:38.193: INFO: Created: latency-svc-9mlz7 Feb 2 22:47:38.215: INFO: Created: latency-svc-9kjr4 Feb 2 22:47:38.215: INFO: Got endpoints: latency-svc-9mlz7 [1.583366585s] Feb 2 22:47:38.223: INFO: Got endpoints: latency-svc-9kjr4 [1.52150085s] Feb 2 22:47:38.239: INFO: Created: latency-svc-f6mdl Feb 2 22:47:38.247: INFO: Got endpoints: latency-svc-f6mdl [1.509117616s] Feb 2 22:47:38.273: INFO: Created: latency-svc-rtcqk Feb 2 22:47:38.354: INFO: Got endpoints: latency-svc-rtcqk [1.573165652s] Feb 2 22:47:38.356: INFO: Created: latency-svc-82tjr Feb 2 22:47:38.395: INFO: Got endpoints: latency-svc-82tjr [1.536110958s] Feb 2 
22:47:38.425: INFO: Created: latency-svc-xh287 Feb 2 22:47:38.453: INFO: Got endpoints: latency-svc-xh287 [1.543013284s] Feb 2 22:47:38.503: INFO: Created: latency-svc-pkdhh Feb 2 22:47:38.544: INFO: Got endpoints: latency-svc-pkdhh [1.090436045s] Feb 2 22:47:38.545: INFO: Created: latency-svc-vfkxs Feb 2 22:47:38.574: INFO: Got endpoints: latency-svc-vfkxs [982.953088ms] Feb 2 22:47:38.641: INFO: Created: latency-svc-9xgfk Feb 2 22:47:38.670: INFO: Got endpoints: latency-svc-9xgfk [961.198249ms] Feb 2 22:47:38.670: INFO: Created: latency-svc-2z7r7 Feb 2 22:47:38.705: INFO: Got endpoints: latency-svc-2z7r7 [965.1569ms] Feb 2 22:47:38.791: INFO: Created: latency-svc-9zv4t Feb 2 22:47:38.796: INFO: Got endpoints: latency-svc-9zv4t [1.008828461s] Feb 2 22:47:38.833: INFO: Created: latency-svc-v8s57 Feb 2 22:47:38.842: INFO: Got endpoints: latency-svc-v8s57 [935.339738ms] Feb 2 22:47:38.856: INFO: Created: latency-svc-crngj Feb 2 22:47:38.864: INFO: Got endpoints: latency-svc-crngj [892.457362ms] Feb 2 22:47:38.947: INFO: Created: latency-svc-4cf8x Feb 2 22:47:38.982: INFO: Got endpoints: latency-svc-4cf8x [918.282671ms] Feb 2 22:47:38.983: INFO: Created: latency-svc-l9j87 Feb 2 22:47:39.013: INFO: Got endpoints: latency-svc-l9j87 [901.270016ms] Feb 2 22:47:39.042: INFO: Created: latency-svc-m7gjb Feb 2 22:47:39.078: INFO: Got endpoints: latency-svc-m7gjb [862.707861ms] Feb 2 22:47:39.113: INFO: Created: latency-svc-pn4t6 Feb 2 22:47:39.123: INFO: Got endpoints: latency-svc-pn4t6 [899.850055ms] Feb 2 22:47:39.143: INFO: Created: latency-svc-ds4kv Feb 2 22:47:39.154: INFO: Got endpoints: latency-svc-ds4kv [906.666892ms] Feb 2 22:47:39.212: INFO: Created: latency-svc-n52fp Feb 2 22:47:39.236: INFO: Got endpoints: latency-svc-n52fp [882.0287ms] Feb 2 22:47:39.236: INFO: Created: latency-svc-tfgdg Feb 2 22:47:39.275: INFO: Got endpoints: latency-svc-tfgdg [880.263534ms] Feb 2 22:47:39.348: INFO: Created: latency-svc-sh4bp Feb 2 22:47:39.377: INFO: Got endpoints: latency-svc-sh4bp [924.020979ms] Feb 2 22:47:39.403: INFO: Created: latency-svc-dskdc Feb 2 22:47:39.416: INFO: Got endpoints: latency-svc-dskdc [871.712874ms] Feb 2 22:47:39.487: INFO: Created: latency-svc-z5qrt Feb 2 22:47:39.509: INFO: Created: latency-svc-q2vjk Feb 2 22:47:39.510: INFO: Got endpoints: latency-svc-z5qrt [935.606467ms] Feb 2 22:47:39.539: INFO: Got endpoints: latency-svc-q2vjk [869.169483ms] Feb 2 22:47:39.570: INFO: Created: latency-svc-pz5ff Feb 2 22:47:39.584: INFO: Got endpoints: latency-svc-pz5ff [878.487856ms] Feb 2 22:47:39.629: INFO: Created: latency-svc-mvlgt Feb 2 22:47:39.648: INFO: Got endpoints: latency-svc-mvlgt [851.753426ms] Feb 2 22:47:39.691: INFO: Created: latency-svc-svtkf Feb 2 22:47:39.726: INFO: Got endpoints: latency-svc-svtkf [883.673516ms] Feb 2 22:47:39.781: INFO: Created: latency-svc-6lv5k Feb 2 22:47:39.816: INFO: Got endpoints: latency-svc-6lv5k [951.585778ms] Feb 2 22:47:39.916: INFO: Created: latency-svc-fffd9 Feb 2 22:47:39.947: INFO: Got endpoints: latency-svc-fffd9 [964.922796ms] Feb 2 22:47:39.948: INFO: Created: latency-svc-m6r26 Feb 2 22:47:39.973: INFO: Got endpoints: latency-svc-m6r26 [959.304001ms] Feb 2 22:47:40.003: INFO: Created: latency-svc-8fb44 Feb 2 22:47:40.016: INFO: Got endpoints: latency-svc-8fb44 [938.218826ms] Feb 2 22:47:40.055: INFO: Created: latency-svc-l9dv6 Feb 2 22:47:40.074: INFO: Got endpoints: latency-svc-l9dv6 [950.459138ms] Feb 2 22:47:40.074: INFO: Created: latency-svc-xtmfs Feb 2 22:47:40.103: INFO: Got endpoints: latency-svc-xtmfs [949.172067ms] Feb 2 
22:47:40.128: INFO: Created: latency-svc-8dm2z Feb 2 22:47:40.141: INFO: Got endpoints: latency-svc-8dm2z [904.886847ms] Feb 2 22:47:40.185: INFO: Created: latency-svc-ddx6q Feb 2 22:47:40.206: INFO: Created: latency-svc-5z42q Feb 2 22:47:40.207: INFO: Got endpoints: latency-svc-ddx6q [931.032582ms] Feb 2 22:47:40.230: INFO: Got endpoints: latency-svc-5z42q [852.978754ms] Feb 2 22:47:40.254: INFO: Created: latency-svc-nz289 Feb 2 22:47:40.267: INFO: Got endpoints: latency-svc-nz289 [850.445958ms] Feb 2 22:47:40.353: INFO: Created: latency-svc-xkm4x Feb 2 22:47:40.405: INFO: Created: latency-svc-gf8n5 Feb 2 22:47:40.405: INFO: Got endpoints: latency-svc-xkm4x [895.352913ms] Feb 2 22:47:40.423: INFO: Got endpoints: latency-svc-gf8n5 [884.377245ms] Feb 2 22:47:40.441: INFO: Created: latency-svc-hszth Feb 2 22:47:40.473: INFO: Got endpoints: latency-svc-hszth [889.403481ms] Feb 2 22:47:40.487: INFO: Created: latency-svc-j4pd6 Feb 2 22:47:40.501: INFO: Got endpoints: latency-svc-j4pd6 [853.549457ms] Feb 2 22:47:40.518: INFO: Created: latency-svc-llcfw Feb 2 22:47:40.531: INFO: Got endpoints: latency-svc-llcfw [805.164091ms] Feb 2 22:47:40.554: INFO: Created: latency-svc-qh7zk Feb 2 22:47:40.567: INFO: Got endpoints: latency-svc-qh7zk [751.195601ms] Feb 2 22:47:40.605: INFO: Created: latency-svc-k8zjc Feb 2 22:47:40.614: INFO: Got endpoints: latency-svc-k8zjc [666.610084ms] Feb 2 22:47:40.644: INFO: Created: latency-svc-r7tps Feb 2 22:47:40.657: INFO: Got endpoints: latency-svc-r7tps [683.981329ms] Feb 2 22:47:40.674: INFO: Created: latency-svc-l5ntc Feb 2 22:47:40.686: INFO: Got endpoints: latency-svc-l5ntc [669.905915ms] Feb 2 22:47:40.703: INFO: Created: latency-svc-9n92v Feb 2 22:47:40.755: INFO: Got endpoints: latency-svc-9n92v [681.254069ms] Feb 2 22:47:40.771: INFO: Created: latency-svc-xnzfd Feb 2 22:47:40.788: INFO: Got endpoints: latency-svc-xnzfd [684.589177ms] Feb 2 22:47:40.837: INFO: Created: latency-svc-nbzz7 Feb 2 22:47:40.848: INFO: Got endpoints: latency-svc-nbzz7 [707.692602ms] Feb 2 22:47:40.887: INFO: Created: latency-svc-vpk24 Feb 2 22:47:40.908: INFO: Got endpoints: latency-svc-vpk24 [701.304023ms] Feb 2 22:47:40.909: INFO: Created: latency-svc-xjz68 Feb 2 22:47:40.956: INFO: Got endpoints: latency-svc-xjz68 [725.212536ms] Feb 2 22:47:40.980: INFO: Created: latency-svc-q7nmn Feb 2 22:47:41.024: INFO: Got endpoints: latency-svc-q7nmn [757.140486ms] Feb 2 22:47:41.045: INFO: Created: latency-svc-v7hg4 Feb 2 22:47:41.058: INFO: Got endpoints: latency-svc-v7hg4 [653.136585ms] Feb 2 22:47:41.081: INFO: Created: latency-svc-djkrl Feb 2 22:47:41.094: INFO: Got endpoints: latency-svc-djkrl [670.845524ms] Feb 2 22:47:41.118: INFO: Created: latency-svc-z75bt Feb 2 22:47:41.180: INFO: Got endpoints: latency-svc-z75bt [706.800927ms] Feb 2 22:47:41.203: INFO: Created: latency-svc-2ns24 Feb 2 22:47:41.226: INFO: Got endpoints: latency-svc-2ns24 [724.174207ms] Feb 2 22:47:41.250: INFO: Created: latency-svc-hzwsp Feb 2 22:47:41.274: INFO: Got endpoints: latency-svc-hzwsp [742.081261ms] Feb 2 22:47:41.324: INFO: Created: latency-svc-fc2v8 Feb 2 22:47:41.333: INFO: Got endpoints: latency-svc-fc2v8 [765.389745ms] Feb 2 22:47:41.371: INFO: Created: latency-svc-g8thz Feb 2 22:47:41.387: INFO: Got endpoints: latency-svc-g8thz [773.00122ms] Feb 2 22:47:41.407: INFO: Created: latency-svc-zlxhr Feb 2 22:47:41.462: INFO: Got endpoints: latency-svc-zlxhr [804.934934ms] Feb 2 22:47:41.462: INFO: Created: latency-svc-mxgf8 Feb 2 22:47:41.466: INFO: Got endpoints: latency-svc-mxgf8 [780.317351ms] Feb 2 
22:47:41.490: INFO: Created: latency-svc-84jmd Feb 2 22:47:41.502: INFO: Got endpoints: latency-svc-84jmd [746.917563ms] Feb 2 22:47:41.520: INFO: Created: latency-svc-nhcg2 Feb 2 22:47:41.532: INFO: Got endpoints: latency-svc-nhcg2 [743.760084ms] Feb 2 22:47:41.551: INFO: Created: latency-svc-8h8wc Feb 2 22:47:41.587: INFO: Got endpoints: latency-svc-8h8wc [738.545735ms] Feb 2 22:47:41.611: INFO: Created: latency-svc-hx2qj Feb 2 22:47:41.628: INFO: Got endpoints: latency-svc-hx2qj [720.086665ms] Feb 2 22:47:41.646: INFO: Created: latency-svc-bzmt4 Feb 2 22:47:41.719: INFO: Got endpoints: latency-svc-bzmt4 [763.142852ms] Feb 2 22:47:41.719: INFO: Latencies: [56.731631ms 155.49329ms 234.825956ms 287.680109ms 408.251237ms 442.057167ms 520.436234ms 567.798261ms 623.826239ms 653.136585ms 666.610084ms 669.905915ms 670.845524ms 681.254069ms 683.981329ms 684.589177ms 688.638934ms 701.304023ms 706.800927ms 707.692602ms 720.086665ms 724.174207ms 725.212536ms 728.477958ms 738.545735ms 742.081261ms 743.760084ms 746.917563ms 751.195601ms 757.140486ms 761.294128ms 763.142852ms 765.389745ms 773.00122ms 776.962101ms 780.317351ms 795.692249ms 797.90054ms 799.849881ms 804.934934ms 805.164091ms 807.098886ms 810.122065ms 812.674234ms 816.001883ms 824.587533ms 825.395569ms 829.159678ms 839.688249ms 850.445958ms 851.429863ms 851.753426ms 852.978754ms 853.549457ms 858.580252ms 860.025414ms 861.694442ms 862.707861ms 863.796131ms 865.355037ms 866.365527ms 866.523218ms 867.889163ms 868.603751ms 869.169483ms 871.712874ms 872.542968ms 874.295406ms 875.457162ms 878.487856ms 880.263534ms 882.0287ms 883.673516ms 884.280575ms 884.377245ms 886.894894ms 889.403481ms 892.457362ms 892.898614ms 893.395545ms 893.565788ms 893.919211ms 895.352913ms 895.54258ms 897.916682ms 899.850055ms 901.270016ms 902.078832ms 904.886847ms 906.666892ms 907.091948ms 908.067873ms 916.718769ms 918.282671ms 922.118759ms 922.437852ms 923.365269ms 923.540985ms 924.020979ms 924.054549ms 924.655002ms 926.643353ms 927.106622ms 931.032582ms 934.425289ms 934.822323ms 935.339738ms 935.606467ms 938.218826ms 941.496376ms 944.757137ms 944.933199ms 946.228045ms 946.311121ms 946.678225ms 947.62602ms 949.172067ms 950.459138ms 950.780336ms 951.585778ms 953.253878ms 957.564408ms 958.342599ms 958.915735ms 959.304001ms 960.046487ms 961.198249ms 964.463694ms 964.922796ms 965.1569ms 974.508226ms 976.272834ms 976.82087ms 980.994105ms 982.953088ms 991.45939ms 992.195985ms 992.65475ms 993.055739ms 994.907097ms 999.236472ms 1.000355734s 1.000495351s 1.003336707s 1.003754636s 1.004082844s 1.006490904s 1.008828461s 1.009425314s 1.023585629s 1.03583666s 1.036681678s 1.04046224s 1.043126119s 1.045321034s 1.046547061s 1.046971675s 1.047849826s 1.050591012s 1.050823513s 1.05887823s 1.059407697s 1.06265133s 1.066477457s 1.066830489s 1.068687119s 1.072026615s 1.077434423s 1.084452825s 1.090436045s 1.101577155s 1.105563325s 1.108801559s 1.110588348s 1.113329968s 1.126451202s 1.14421809s 1.160934381s 1.16484077s 1.164955222s 1.174335846s 1.177027715s 1.198528899s 1.202721924s 1.215592562s 1.350273616s 1.397307309s 1.432783824s 1.439978639s 1.45555059s 1.479475508s 1.509117616s 1.514167006s 1.518326912s 1.52150085s 1.536110958s 1.540618523s 1.543013284s 1.573165652s 1.583366585s] Feb 2 22:47:41.719: INFO: 50 %ile: 924.655002ms Feb 2 22:47:41.719: INFO: 90 %ile: 1.174335846s Feb 2 22:47:41.719: INFO: 99 %ile: 1.573165652s Feb 2 22:47:41.719: INFO: Total sample count: 200 [AfterEach] [sig-network] Service endpoints latency 
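The 50/90/99 %ile figures printed above are taken directly from the sorted latency samples. A minimal sketch of one indexing convention (0-based index = len*p/100 into the sorted slice) that reproduces the values reported for this run; the sample values below are just three of the 200 measurements quoted in the "Latencies:" line:

```go
// Sketch only: derives percentile figures from endpoint-propagation latencies.
// The indexing convention is an assumption, but it matches the printed output
// (50 %ile = 101st sorted value, 99 %ile = 199th sorted value of 200).
package main

import (
	"fmt"
	"sort"
	"time"
)

// percentile returns the p-th percentile of the samples using the
// sorted[len*p/100] convention described above.
func percentile(samples []time.Duration, p int) time.Duration {
	sorted := append([]time.Duration(nil), samples...)
	sort.Slice(sorted, func(i, j int) bool { return sorted[i] < sorted[j] })
	return sorted[len(sorted)*p/100]
}

func main() {
	// In the real run these would be all 200 values from the "Latencies:" line.
	samples := []time.Duration{
		946678225 * time.Nanosecond,  // 946.678225ms (latency-svc-vjghg)
		923365269 * time.Nanosecond,  // 923.365269ms (latency-svc-gxsqp)
		1583366585 * time.Nanosecond, // 1.583366585s (latency-svc-9mlz7)
	}
	fmt.Println("50 %ile:", percentile(samples, 50))
	fmt.Println("90 %ile:", percentile(samples, 90))
	fmt.Println("99 %ile:", percentile(samples, 99))
}
```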
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 2 22:47:41.719: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svc-latency-4988" for this suite. • [SLOW TEST:17.370 seconds] [sig-network] Service endpoints latency /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should not be very high [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-network] Service endpoints latency should not be very high [Conformance]","total":309,"completed":35,"skipped":731,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 2 22:47:41.767: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: creating a watch on configmaps STEP: creating a new configmap STEP: modifying the configmap once STEP: closing the watch once it receives two notifications Feb 2 22:47:41.925: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-2044 c57cebfc-5fc2-4021-923c-4f537cfb63fa 4170826 0 2021-02-02 22:47:41 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2021-02-02 22:47:41 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Feb 2 22:47:41.925: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-2044 c57cebfc-5fc2-4021-923c-4f537cfb63fa 4170827 0 2021-02-02 22:47:41 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2021-02-02 22:47:41 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying the configmap a second time, while the watch is closed STEP: creating a new watch on configmaps from the last resource version observed by the first watch STEP: deleting the configmap STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed Feb 2 22:47:41.985: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-2044 c57cebfc-5fc2-4021-923c-4f537cfb63fa 4170828 0 2021-02-02 22:47:41 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2021-02-02 22:47:41 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 
2,},BinaryData:map[string][]byte{},Immutable:nil,} Feb 2 22:47:41.985: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-2044 c57cebfc-5fc2-4021-923c-4f537cfb63fa 4170829 0 2021-02-02 22:47:41 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2021-02-02 22:47:41 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 2 22:47:41.985: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-2044" for this suite. •{"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":309,"completed":36,"skipped":758,"failed":0} SSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 2 22:47:42.006: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:162 [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: creating the pod Feb 2 22:47:42.079: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 2 22:47:49.863: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-994" for this suite. 
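A minimal sketch (assumed, not the suite's actual fixture; image names and the function name are illustrative) of the kind of pod the init-container test above creates: the init container always exits non-zero, and with RestartPolicy Never the kubelet must not start the app container and must move the pod to the Failed phase.

```go
// Sketch of a RestartNever pod whose init container always fails.
package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func failingInitPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-init-fail"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			InitContainers: []corev1.Container{{
				Name:    "init-fail",
				Image:   "busybox",              // illustrative image
				Command: []string{"/bin/false"}, // always exits non-zero
			}},
			Containers: []corev1.Container{{
				Name:    "app",
				Image:   "busybox",
				Command: []string{"sleep", "3600"}, // never started when init fails
			}},
		},
	}
}
```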
• [SLOW TEST:7.905 seconds] [k8s.io] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":309,"completed":37,"skipped":771,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 2 22:47:49.910: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group but different versions [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: CRs in the same group but different versions (one multiversion CRD) show up in OpenAPI documentation Feb 2 22:47:50.002: INFO: >>> kubeConfig: /root/.kube/config STEP: CRs in the same group but different versions (two CRDs) show up in OpenAPI documentation Feb 2 22:48:03.571: INFO: >>> kubeConfig: /root/.kube/config Feb 2 22:48:07.178: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 2 22:48:21.155: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-7543" for this suite. • [SLOW TEST:31.252 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group but different versions [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":309,"completed":38,"skipped":778,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. 
[Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 2 22:48:21.163: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to update and delete ResourceQuota. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a ResourceQuota STEP: Getting a ResourceQuota STEP: Updating a ResourceQuota STEP: Verifying a ResourceQuota was modified STEP: Deleting a ResourceQuota STEP: Verifying the deleted ResourceQuota [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 2 22:48:21.324: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-5020" for this suite. •{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance]","total":309,"completed":39,"skipped":792,"failed":0} SSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 2 22:48:21.331: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating secret with name secret-test-map-27fb9391-f901-431f-9d10-8e3b6dddd739 STEP: Creating a pod to test consume secrets Feb 2 22:48:21.436: INFO: Waiting up to 5m0s for pod "pod-secrets-a99ac4ca-b498-4506-92ba-f3b4b85e9fe0" in namespace "secrets-6204" to be "Succeeded or Failed" Feb 2 22:48:21.458: INFO: Pod "pod-secrets-a99ac4ca-b498-4506-92ba-f3b4b85e9fe0": Phase="Pending", Reason="", readiness=false. Elapsed: 21.587007ms Feb 2 22:48:23.461: INFO: Pod "pod-secrets-a99ac4ca-b498-4506-92ba-f3b4b85e9fe0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025190438s Feb 2 22:48:25.465: INFO: Pod "pod-secrets-a99ac4ca-b498-4506-92ba-f3b4b85e9fe0": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.029329723s STEP: Saw pod success Feb 2 22:48:25.465: INFO: Pod "pod-secrets-a99ac4ca-b498-4506-92ba-f3b4b85e9fe0" satisfied condition "Succeeded or Failed" Feb 2 22:48:25.468: INFO: Trying to get logs from node leguer-worker pod pod-secrets-a99ac4ca-b498-4506-92ba-f3b4b85e9fe0 container secret-volume-test: STEP: delete the pod Feb 2 22:48:25.588: INFO: Waiting for pod pod-secrets-a99ac4ca-b498-4506-92ba-f3b4b85e9fe0 to disappear Feb 2 22:48:25.591: INFO: Pod pod-secrets-a99ac4ca-b498-4506-92ba-f3b4b85e9fe0 no longer exists [AfterEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 2 22:48:25.591: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-6204" for this suite. •{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":309,"completed":40,"skipped":796,"failed":0} SSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 2 22:48:25.599: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating configMap with name projected-configmap-test-volume-map-fef771b5-53db-4c99-80da-bac1e7938f74 STEP: Creating a pod to test consume configMaps Feb 2 22:48:25.719: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-1a9afcf8-1baa-44d2-83c3-0f407306060e" in namespace "projected-5041" to be "Succeeded or Failed" Feb 2 22:48:25.735: INFO: Pod "pod-projected-configmaps-1a9afcf8-1baa-44d2-83c3-0f407306060e": Phase="Pending", Reason="", readiness=false. Elapsed: 16.526065ms Feb 2 22:48:27.766: INFO: Pod "pod-projected-configmaps-1a9afcf8-1baa-44d2-83c3-0f407306060e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.047422314s Feb 2 22:48:29.770: INFO: Pod "pod-projected-configmaps-1a9afcf8-1baa-44d2-83c3-0f407306060e": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.051447475s STEP: Saw pod success Feb 2 22:48:29.770: INFO: Pod "pod-projected-configmaps-1a9afcf8-1baa-44d2-83c3-0f407306060e" satisfied condition "Succeeded or Failed" Feb 2 22:48:29.773: INFO: Trying to get logs from node leguer-worker pod pod-projected-configmaps-1a9afcf8-1baa-44d2-83c3-0f407306060e container agnhost-container: STEP: delete the pod Feb 2 22:48:29.796: INFO: Waiting for pod pod-projected-configmaps-1a9afcf8-1baa-44d2-83c3-0f407306060e to disappear Feb 2 22:48:29.813: INFO: Pod pod-projected-configmaps-1a9afcf8-1baa-44d2-83c3-0f407306060e no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 2 22:48:29.813: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5041" for this suite. •{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":309,"completed":41,"skipped":804,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 2 22:48:29.822: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating configMap with name configmap-test-volume-93869adc-f17d-496e-8f10-f8eb2f3007d9 STEP: Creating a pod to test consume configMaps Feb 2 22:48:29.912: INFO: Waiting up to 5m0s for pod "pod-configmaps-6912e60c-5ff6-48ec-8e16-3f8dd9dfebb7" in namespace "configmap-8370" to be "Succeeded or Failed" Feb 2 22:48:29.928: INFO: Pod "pod-configmaps-6912e60c-5ff6-48ec-8e16-3f8dd9dfebb7": Phase="Pending", Reason="", readiness=false. Elapsed: 16.821315ms Feb 2 22:48:31.932: INFO: Pod "pod-configmaps-6912e60c-5ff6-48ec-8e16-3f8dd9dfebb7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020217263s Feb 2 22:48:33.937: INFO: Pod "pod-configmaps-6912e60c-5ff6-48ec-8e16-3f8dd9dfebb7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025366109s STEP: Saw pod success Feb 2 22:48:33.937: INFO: Pod "pod-configmaps-6912e60c-5ff6-48ec-8e16-3f8dd9dfebb7" satisfied condition "Succeeded or Failed" Feb 2 22:48:33.941: INFO: Trying to get logs from node leguer-worker pod pod-configmaps-6912e60c-5ff6-48ec-8e16-3f8dd9dfebb7 container agnhost-container: STEP: delete the pod Feb 2 22:48:33.996: INFO: Waiting for pod pod-configmaps-6912e60c-5ff6-48ec-8e16-3f8dd9dfebb7 to disappear Feb 2 22:48:34.005: INFO: Pod pod-configmaps-6912e60c-5ff6-48ec-8e16-3f8dd9dfebb7 no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 2 22:48:34.005: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-8370" for this suite. 
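A minimal sketch (assumed, not the suite's actual fixture; the image, mount path, and key name are illustrative) of the kind of pod the ConfigMap volume tests above create: each key of the referenced ConfigMap appears as a file under the mount path, and the test container simply reads one back.

```go
// Sketch of a pod consuming a ConfigMap as a volume.
package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func configMapVolumePod(configMapName string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{GenerateName: "pod-configmaps-"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "agnhost-container",
				Image:   "busybox", // the suite uses an agnhost image; busybox keeps the sketch simple
				Command: []string{"cat", "/etc/configmap-volume/data-1"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "configmap-volume",
					MountPath: "/etc/configmap-volume",
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "configmap-volume",
				VolumeSource: corev1.VolumeSource{
					ConfigMap: &corev1.ConfigMapVolumeSource{
						LocalObjectReference: corev1.LocalObjectReference{Name: configMapName},
					},
				},
			}},
		},
	}
}
```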
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":309,"completed":42,"skipped":855,"failed":0} SSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 2 22:48:34.012: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not conflict [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Cleaning up the secret STEP: Cleaning up the configmap STEP: Cleaning up the pod [AfterEach] [sig-storage] EmptyDir wrapper volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 2 22:48:38.265: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-8072" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":309,"completed":43,"skipped":859,"failed":0} SS ------------------------------ [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 2 22:48:38.306: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41 [It] should provide podname only [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a pod to test downward API volume plugin Feb 2 22:48:38.608: INFO: Waiting up to 5m0s for pod "downwardapi-volume-50dbd6ec-9048-4f51-983b-f791e15ff054" in namespace "projected-2435" to be "Succeeded or Failed" Feb 2 22:48:38.634: INFO: Pod "downwardapi-volume-50dbd6ec-9048-4f51-983b-f791e15ff054": Phase="Pending", Reason="", readiness=false. Elapsed: 26.407564ms Feb 2 22:48:40.644: INFO: Pod "downwardapi-volume-50dbd6ec-9048-4f51-983b-f791e15ff054": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035609751s Feb 2 22:48:42.649: INFO: Pod "downwardapi-volume-50dbd6ec-9048-4f51-983b-f791e15ff054": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.040999253s STEP: Saw pod success Feb 2 22:48:42.649: INFO: Pod "downwardapi-volume-50dbd6ec-9048-4f51-983b-f791e15ff054" satisfied condition "Succeeded or Failed" Feb 2 22:48:42.653: INFO: Trying to get logs from node leguer-worker pod downwardapi-volume-50dbd6ec-9048-4f51-983b-f791e15ff054 container client-container: STEP: delete the pod Feb 2 22:48:42.717: INFO: Waiting for pod downwardapi-volume-50dbd6ec-9048-4f51-983b-f791e15ff054 to disappear Feb 2 22:48:42.754: INFO: Pod downwardapi-volume-50dbd6ec-9048-4f51-983b-f791e15ff054 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 2 22:48:42.754: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2435" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":309,"completed":44,"skipped":861,"failed":0} SSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 2 22:48:42.763: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating configMap with name configmap-test-volume-map-27f97c43-31ed-4828-99c3-86d4340491ea STEP: Creating a pod to test consume configMaps Feb 2 22:48:42.870: INFO: Waiting up to 5m0s for pod "pod-configmaps-f4fad781-a60d-4b15-a832-8fb783bed9c4" in namespace "configmap-9937" to be "Succeeded or Failed" Feb 2 22:48:42.878: INFO: Pod "pod-configmaps-f4fad781-a60d-4b15-a832-8fb783bed9c4": Phase="Pending", Reason="", readiness=false. Elapsed: 8.049041ms Feb 2 22:48:44.897: INFO: Pod "pod-configmaps-f4fad781-a60d-4b15-a832-8fb783bed9c4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026816753s Feb 2 22:48:46.909: INFO: Pod "pod-configmaps-f4fad781-a60d-4b15-a832-8fb783bed9c4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.038498685s Feb 2 22:48:48.913: INFO: Pod "pod-configmaps-f4fad781-a60d-4b15-a832-8fb783bed9c4": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.042687536s STEP: Saw pod success Feb 2 22:48:48.913: INFO: Pod "pod-configmaps-f4fad781-a60d-4b15-a832-8fb783bed9c4" satisfied condition "Succeeded or Failed" Feb 2 22:48:48.916: INFO: Trying to get logs from node leguer-worker pod pod-configmaps-f4fad781-a60d-4b15-a832-8fb783bed9c4 container agnhost-container: STEP: delete the pod Feb 2 22:48:48.975: INFO: Waiting for pod pod-configmaps-f4fad781-a60d-4b15-a832-8fb783bed9c4 to disappear Feb 2 22:48:49.017: INFO: Pod pod-configmaps-f4fad781-a60d-4b15-a832-8fb783bed9c4 no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 2 22:48:49.017: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-9937" for this suite. • [SLOW TEST:6.326 seconds] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":309,"completed":45,"skipped":865,"failed":0} [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 2 22:48:49.089: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating configMap with name configmap-test-volume-c2d37c1c-eb84-4082-ba98-44bf79bac184 STEP: Creating a pod to test consume configMaps Feb 2 22:48:49.218: INFO: Waiting up to 5m0s for pod "pod-configmaps-bf50165a-afae-494c-ba59-c1fa80089c45" in namespace "configmap-9456" to be "Succeeded or Failed" Feb 2 22:48:49.221: INFO: Pod "pod-configmaps-bf50165a-afae-494c-ba59-c1fa80089c45": Phase="Pending", Reason="", readiness=false. Elapsed: 3.026477ms Feb 2 22:48:51.226: INFO: Pod "pod-configmaps-bf50165a-afae-494c-ba59-c1fa80089c45": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008500062s Feb 2 22:48:53.230: INFO: Pod "pod-configmaps-bf50165a-afae-494c-ba59-c1fa80089c45": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.012762718s STEP: Saw pod success Feb 2 22:48:53.230: INFO: Pod "pod-configmaps-bf50165a-afae-494c-ba59-c1fa80089c45" satisfied condition "Succeeded or Failed" Feb 2 22:48:53.234: INFO: Trying to get logs from node leguer-worker pod pod-configmaps-bf50165a-afae-494c-ba59-c1fa80089c45 container agnhost-container: STEP: delete the pod Feb 2 22:48:53.498: INFO: Waiting for pod pod-configmaps-bf50165a-afae-494c-ba59-c1fa80089c45 to disappear Feb 2 22:48:53.508: INFO: Pod pod-configmaps-bf50165a-afae-494c-ba59-c1fa80089c45 no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 2 22:48:53.508: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-9456" for this suite. •{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":309,"completed":46,"skipped":865,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 2 22:48:53.553: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 Feb 2 22:48:53.724: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-d90c2083-1c44-4771-bf56-6f672a5004d5" in namespace "security-context-test-407" to be "Succeeded or Failed" Feb 2 22:48:53.736: INFO: Pod "alpine-nnp-false-d90c2083-1c44-4771-bf56-6f672a5004d5": Phase="Pending", Reason="", readiness=false. Elapsed: 12.265089ms Feb 2 22:48:55.742: INFO: Pod "alpine-nnp-false-d90c2083-1c44-4771-bf56-6f672a5004d5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018306957s Feb 2 22:48:57.772: INFO: Pod "alpine-nnp-false-d90c2083-1c44-4771-bf56-6f672a5004d5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.048285617s Feb 2 22:48:57.772: INFO: Pod "alpine-nnp-false-d90c2083-1c44-4771-bf56-6f672a5004d5" satisfied condition "Succeeded or Failed" [AfterEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 2 22:48:57.778: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-407" for this suite. 
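A minimal sketch (assumed, not the suite's actual fixture; the image and user ID are illustrative) of the security context the AllowPrivilegeEscalation test above exercises: with the flag set to false, a process in the container cannot gain more privileges than it started with, e.g. via setuid binaries.

```go
// Sketch of a container that forbids privilege escalation.
package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func noPrivilegeEscalationPod() *corev1.Pod {
	runAsUser := int64(1000)
	allowEscalation := false
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{GenerateName: "alpine-nnp-false-"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "test",
				Image: "alpine", // illustrative; the suite uses its own test image
				SecurityContext: &corev1.SecurityContext{
					RunAsUser:                &runAsUser,
					AllowPrivilegeEscalation: &allowEscalation,
				},
			}},
		},
	}
}
```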
•{"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":309,"completed":47,"skipped":900,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [k8s.io] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 2 22:48:57.787: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [BeforeEach] when scheduling a busybox command that always fails in a pod /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:82 [It] should be possible to delete [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [AfterEach] [k8s.io] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 2 22:48:57.977: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-9004" for this suite. •{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":309,"completed":48,"skipped":920,"failed":0} SS ------------------------------ [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 2 22:48:58.013: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41 [It] should update annotations on modification [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating the pod Feb 2 22:49:02.820: INFO: Successfully updated pod "annotationupdatef35cc128-2657-4baf-a06d-91663facde0a" [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 2 22:49:06.917: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2332" for this suite. 
• [SLOW TEST:8.916 seconds] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:36 should update annotations on modification [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":309,"completed":49,"skipped":922,"failed":0} SS ------------------------------ [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 2 22:49:06.928: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41 [It] should provide container's cpu request [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a pod to test downward API volume plugin Feb 2 22:49:07.059: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3757ac15-b225-44e0-89cd-22a66ba150da" in namespace "downward-api-2203" to be "Succeeded or Failed" Feb 2 22:49:07.065: INFO: Pod "downwardapi-volume-3757ac15-b225-44e0-89cd-22a66ba150da": Phase="Pending", Reason="", readiness=false. Elapsed: 5.623749ms Feb 2 22:49:09.102: INFO: Pod "downwardapi-volume-3757ac15-b225-44e0-89cd-22a66ba150da": Phase="Pending", Reason="", readiness=false. Elapsed: 2.042874773s Feb 2 22:49:11.107: INFO: Pod "downwardapi-volume-3757ac15-b225-44e0-89cd-22a66ba150da": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.047630268s STEP: Saw pod success Feb 2 22:49:11.107: INFO: Pod "downwardapi-volume-3757ac15-b225-44e0-89cd-22a66ba150da" satisfied condition "Succeeded or Failed" Feb 2 22:49:11.119: INFO: Trying to get logs from node leguer-worker2 pod downwardapi-volume-3757ac15-b225-44e0-89cd-22a66ba150da container client-container: STEP: delete the pod Feb 2 22:49:11.173: INFO: Waiting for pod downwardapi-volume-3757ac15-b225-44e0-89cd-22a66ba150da to disappear Feb 2 22:49:11.179: INFO: Pod downwardapi-volume-3757ac15-b225-44e0-89cd-22a66ba150da no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 2 22:49:11.179: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2203" for this suite. 
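A minimal sketch (assumed, not the suite's actual fixture; the image, request value, and file path are illustrative) of how a container's CPU request is exposed through a downward API volume via a resourceFieldRef, which is what the "should provide container's cpu request" test above reads back from the mounted file.

```go
// Sketch of a pod exposing its own CPU request through a downward API volume.
package sketch

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func cpuRequestVolumePod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{GenerateName: "downwardapi-volume-"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "client-container",
				Image:   "busybox", // illustrative
				Command: []string{"cat", "/etc/podinfo/cpu_request"},
				Resources: corev1.ResourceRequirements{
					Requests: corev1.ResourceList{corev1.ResourceCPU: resource.MustParse("250m")},
				},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path: "cpu_request",
							ResourceFieldRef: &corev1.ResourceFieldSelector{
								ContainerName: "client-container",
								Resource:      "requests.cpu",
							},
						}},
					},
				},
			}},
		},
	}
}
```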
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":309,"completed":50,"skipped":924,"failed":0} SSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [k8s.io] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 2 22:49:11.187: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:52 STEP: create the container to handle the HTTPGet hook request. [It] should execute poststart exec hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Feb 2 22:49:21.348: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Feb 2 22:49:21.400: INFO: Pod pod-with-poststart-exec-hook still exists Feb 2 22:49:23.400: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Feb 2 22:49:23.404: INFO: Pod pod-with-poststart-exec-hook still exists Feb 2 22:49:25.400: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Feb 2 22:49:25.406: INFO: Pod pod-with-poststart-exec-hook still exists Feb 2 22:49:27.400: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Feb 2 22:49:27.403: INFO: Pod pod-with-poststart-exec-hook still exists Feb 2 22:49:29.400: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Feb 2 22:49:29.411: INFO: Pod pod-with-poststart-exec-hook still exists Feb 2 22:49:31.400: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Feb 2 22:49:31.404: INFO: Pod pod-with-poststart-exec-hook still exists Feb 2 22:49:33.400: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Feb 2 22:49:33.407: INFO: Pod pod-with-poststart-exec-hook still exists Feb 2 22:49:35.400: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Feb 2 22:49:35.405: INFO: Pod pod-with-poststart-exec-hook still exists Feb 2 22:49:37.400: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Feb 2 22:49:37.427: INFO: Pod pod-with-poststart-exec-hook still exists Feb 2 22:49:39.400: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Feb 2 22:49:39.414: INFO: Pod pod-with-poststart-exec-hook still exists Feb 2 22:49:41.400: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Feb 2 22:49:41.404: INFO: Pod pod-with-poststart-exec-hook still exists Feb 2 22:49:43.400: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Feb 2 22:49:43.405: INFO: Pod pod-with-poststart-exec-hook still exists Feb 2 22:49:45.400: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Feb 2 22:49:45.404: INFO: Pod pod-with-poststart-exec-hook still exists Feb 2 22:49:47.400: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Feb 2 22:49:47.405: INFO: Pod 
pod-with-poststart-exec-hook still exists Feb 2 22:49:49.400: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Feb 2 22:49:49.404: INFO: Pod pod-with-poststart-exec-hook still exists Feb 2 22:49:51.400: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Feb 2 22:49:51.404: INFO: Pod pod-with-poststart-exec-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 2 22:49:51.404: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-3139" for this suite. • [SLOW TEST:40.227 seconds] [k8s.io] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 when create a pod with lifecycle hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:43 should execute poststart exec hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":309,"completed":51,"skipped":930,"failed":0} SSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 2 22:49:51.414: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-4240 [It] Should recreate evicted statefulset [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Looking for a node to schedule stateful set and pod STEP: Creating pod with conflicting port in namespace statefulset-4240 STEP: Creating statefulset with conflicting port in namespace statefulset-4240 STEP: Waiting until pod test-pod will start running in namespace statefulset-4240 STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-4240 Feb 2 22:49:57.994: INFO: Observed stateful pod in namespace: statefulset-4240, name: ss-0, uid: dd5d3dab-d520-4080-a4ad-aec49311ca9b, status phase: Pending. Waiting for statefulset controller to delete. Feb 2 22:49:58.079: INFO: Observed stateful pod in namespace: statefulset-4240, name: ss-0, uid: dd5d3dab-d520-4080-a4ad-aec49311ca9b, status phase: Failed. Waiting for statefulset controller to delete. 
Feb 2 22:49:58.187: INFO: Observed stateful pod in namespace: statefulset-4240, name: ss-0, uid: dd5d3dab-d520-4080-a4ad-aec49311ca9b, status phase: Failed. Waiting for statefulset controller to delete. Feb 2 22:49:58.264: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-4240 STEP: Removing pod with conflicting port in namespace statefulset-4240 STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-4240 and will be in running state [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 Feb 2 22:50:04.373: INFO: Deleting all statefulset in ns statefulset-4240 Feb 2 22:50:04.376: INFO: Scaling statefulset ss to 0 Feb 2 22:50:54.393: INFO: Waiting for statefulset status.replicas updated to 0 Feb 2 22:50:54.396: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 2 22:50:54.413: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-4240" for this suite. • [SLOW TEST:63.006 seconds] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 Should recreate evicted statefulset [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":309,"completed":52,"skipped":939,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-instrumentation] Events API should ensure that an event can be fetched, patched, deleted, and listed [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-instrumentation] Events API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 2 22:50:54.421: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-instrumentation] Events API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/instrumentation/events.go:81 [It] should ensure that an event can be fetched, patched, deleted, and listed [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: creating a test event STEP: listing events in all namespaces STEP: listing events in test namespace STEP: listing events with field selection filtering on source STEP: listing events with field selection filtering on reportingController STEP: getting the test event STEP: patching the test event STEP: getting the test event STEP: updating the test event STEP: getting the test event STEP: deleting the test event STEP: listing events in all namespaces STEP: listing events in test namespace [AfterEach] [sig-instrumentation] Events API 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 2 22:50:54.593: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-7173" for this suite. •{"msg":"PASSED [sig-instrumentation] Events API should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","total":309,"completed":53,"skipped":965,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [k8s.io] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 2 22:50:54.602: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Feb 2 22:50:58.946: INFO: Expected: &{OK} to match Container's Termination Message: OK -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 2 22:50:58.977: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-7749" for this suite. 
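For reference, the pod shape behind the termination-message check above looks roughly like the following. This is a minimal sketch built with the core/v1 Go types, not the e2e framework's own pod builder; the pod name, image and command are illustrative assumptions.

// terminationmsg.go: a minimal sketch of a container that writes "OK" to its
// termination-message file and uses the FallbackToLogsOnError policy, the two
// fields the test above exercises. Names, image and command are assumptions.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "termination-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "main",
				Image:   "busybox",
				Command: []string{"sh", "-c", "echo -n OK > /dev/termination-log"},
				// Where the kubelet reads the termination message from.
				TerminationMessagePath: "/dev/termination-log",
				// Fall back to the tail of the container log only if the file is empty on error.
				TerminationMessagePolicy: corev1.TerminationMessageFallbackToLogsOnError,
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}

Once the container exits, the message surfaces under status.containerStatuses[].state.terminated.message, which is the value the "Expected: &{OK} to match Container's Termination Message" comparison above reads.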
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":309,"completed":54,"skipped":1011,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 2 22:50:59.042: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a pod to test emptydir 0777 on node default medium Feb 2 22:50:59.134: INFO: Waiting up to 5m0s for pod "pod-86568bb4-1407-4baa-8737-ed87171bab90" in namespace "emptydir-8130" to be "Succeeded or Failed" Feb 2 22:50:59.203: INFO: Pod "pod-86568bb4-1407-4baa-8737-ed87171bab90": Phase="Pending", Reason="", readiness=false. Elapsed: 68.436481ms Feb 2 22:51:01.359: INFO: Pod "pod-86568bb4-1407-4baa-8737-ed87171bab90": Phase="Pending", Reason="", readiness=false. Elapsed: 2.224230363s Feb 2 22:51:03.779: INFO: Pod "pod-86568bb4-1407-4baa-8737-ed87171bab90": Phase="Pending", Reason="", readiness=false. Elapsed: 4.644110887s Feb 2 22:51:05.783: INFO: Pod "pod-86568bb4-1407-4baa-8737-ed87171bab90": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.648533318s STEP: Saw pod success Feb 2 22:51:05.783: INFO: Pod "pod-86568bb4-1407-4baa-8737-ed87171bab90" satisfied condition "Succeeded or Failed" Feb 2 22:51:05.786: INFO: Trying to get logs from node leguer-worker pod pod-86568bb4-1407-4baa-8737-ed87171bab90 container test-container: STEP: delete the pod Feb 2 22:51:05.830: INFO: Waiting for pod pod-86568bb4-1407-4baa-8737-ed87171bab90 to disappear Feb 2 22:51:05.891: INFO: Pod pod-86568bb4-1407-4baa-8737-ed87171bab90 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 2 22:51:05.891: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8130" for this suite. 
• [SLOW TEST:6.858 seconds] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:45 should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":309,"completed":55,"skipped":1025,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 2 22:51:05.901: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a pod to test emptydir 0644 on tmpfs Feb 2 22:51:06.029: INFO: Waiting up to 5m0s for pod "pod-ae1590ab-17ed-460e-8d94-86a7a48836c0" in namespace "emptydir-3474" to be "Succeeded or Failed" Feb 2 22:51:06.033: INFO: Pod "pod-ae1590ab-17ed-460e-8d94-86a7a48836c0": Phase="Pending", Reason="", readiness=false. Elapsed: 3.812267ms Feb 2 22:51:08.037: INFO: Pod "pod-ae1590ab-17ed-460e-8d94-86a7a48836c0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007812418s Feb 2 22:51:10.042: INFO: Pod "pod-ae1590ab-17ed-460e-8d94-86a7a48836c0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012237208s STEP: Saw pod success Feb 2 22:51:10.042: INFO: Pod "pod-ae1590ab-17ed-460e-8d94-86a7a48836c0" satisfied condition "Succeeded or Failed" Feb 2 22:51:10.045: INFO: Trying to get logs from node leguer-worker pod pod-ae1590ab-17ed-460e-8d94-86a7a48836c0 container test-container: STEP: delete the pod Feb 2 22:51:10.149: INFO: Waiting for pod pod-ae1590ab-17ed-460e-8d94-86a7a48836c0 to disappear Feb 2 22:51:10.152: INFO: Pod pod-ae1590ab-17ed-460e-8d94-86a7a48836c0 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 2 22:51:10.152: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3474" for this suite. 
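The EmptyDir cases in this run mount an emptyDir backed either by the node's default medium or by tmpfs; the volume itself only selects the medium (and an optional size limit), while the 0644/0666/0777 in the test names describe file permissions exercised from inside the container. A minimal sketch of the tmpfs-backed variant, using illustrative names and image rather than the suite's own pod builder:

// emptydir_tmpfs.go: a minimal sketch of an emptyDir volume on medium "Memory"
// (tmpfs), the kind of volume the EmptyDir tests above mount. Names, image and
// the chmod command are illustrative assumptions.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-tmpfs-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "scratch",
				VolumeSource: corev1.VolumeSource{
					// Use StorageMediumDefault (or omit Medium) for node-disk backing,
					// as in the (non-root,0777,default) case above.
					EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
				},
			}},
			Containers: []corev1.Container{{
				Name:         "test-container",
				Image:        "busybox",
				Command:      []string{"sh", "-c", "touch /scratch/f && chmod 0644 /scratch/f && ls -l /scratch"},
				VolumeMounts: []corev1.VolumeMount{{Name: "scratch", MountPath: "/scratch"}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}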
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":309,"completed":56,"skipped":1075,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 2 22:51:10.162: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41 [It] should provide container's cpu limit [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a pod to test downward API volume plugin Feb 2 22:51:10.233: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b58ae575-efe4-4f32-bbc5-570fe23282f0" in namespace "projected-557" to be "Succeeded or Failed" Feb 2 22:51:10.244: INFO: Pod "downwardapi-volume-b58ae575-efe4-4f32-bbc5-570fe23282f0": Phase="Pending", Reason="", readiness=false. Elapsed: 10.454818ms Feb 2 22:51:12.248: INFO: Pod "downwardapi-volume-b58ae575-efe4-4f32-bbc5-570fe23282f0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014691362s Feb 2 22:51:14.253: INFO: Pod "downwardapi-volume-b58ae575-efe4-4f32-bbc5-570fe23282f0": Phase="Running", Reason="", readiness=true. Elapsed: 4.020389183s Feb 2 22:51:16.262: INFO: Pod "downwardapi-volume-b58ae575-efe4-4f32-bbc5-570fe23282f0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.028912554s STEP: Saw pod success Feb 2 22:51:16.262: INFO: Pod "downwardapi-volume-b58ae575-efe4-4f32-bbc5-570fe23282f0" satisfied condition "Succeeded or Failed" Feb 2 22:51:16.265: INFO: Trying to get logs from node leguer-worker pod downwardapi-volume-b58ae575-efe4-4f32-bbc5-570fe23282f0 container client-container: STEP: delete the pod Feb 2 22:51:16.307: INFO: Waiting for pod downwardapi-volume-b58ae575-efe4-4f32-bbc5-570fe23282f0 to disappear Feb 2 22:51:16.334: INFO: Pod downwardapi-volume-b58ae575-efe4-4f32-bbc5-570fe23282f0 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 2 22:51:16.334: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-557" for this suite. 
• [SLOW TEST:6.183 seconds] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:35 should provide container's cpu limit [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":309,"completed":57,"skipped":1088,"failed":0} SSSSS ------------------------------ [sig-auth] ServiceAccounts should run through the lifecycle of a ServiceAccount [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 2 22:51:16.346: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should run through the lifecycle of a ServiceAccount [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: creating a ServiceAccount STEP: watching for the ServiceAccount to be added STEP: patching the ServiceAccount STEP: finding ServiceAccount in list of all ServiceAccounts (by LabelSelector) STEP: deleting the ServiceAccount [AfterEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 2 22:51:16.667: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-3224" for this suite. •{"msg":"PASSED [sig-auth] ServiceAccounts should run through the lifecycle of a ServiceAccount [Conformance]","total":309,"completed":58,"skipped":1093,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 2 22:51:16.792: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create secret due to empty secret key [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating projection with secret that has name secret-emptykey-test-8dafa6e4-7213-4175-a165-a4ef7ddcd960 [AfterEach] [sig-api-machinery] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 2 22:51:16.862: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-4612" for this suite. 
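The Secrets case above confirms that API-server validation rejects a Secret whose data map contains an empty key. A minimal client-go sketch of the same check, not the e2e test itself; the kubeconfig path, namespace and secret name are assumptions.

// secret_emptykey.go: attempt to create a Secret with "" as a data key and show
// that the create call fails validation.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	secret := &corev1.Secret{
		ObjectMeta: metav1.ObjectMeta{Name: "secret-emptykey-demo"},
		Data:       map[string][]byte{"": []byte("value-1")}, // empty key: invalid
	}
	_, err = client.CoreV1().Secrets("default").Create(context.TODO(), secret, metav1.CreateOptions{})
	// Expect a validation error along the lines of: Secret "secret-emptykey-demo" is invalid: data[]: Invalid value ...
	fmt.Println("create error:", err)
}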
•{"msg":"PASSED [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]","total":309,"completed":59,"skipped":1114,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 2 22:51:16.877: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a pod to test downward API volume plugin Feb 2 22:51:16.950: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b1053dee-d51e-4573-9442-c2fc6a7491b9" in namespace "projected-5682" to be "Succeeded or Failed" Feb 2 22:51:16.977: INFO: Pod "downwardapi-volume-b1053dee-d51e-4573-9442-c2fc6a7491b9": Phase="Pending", Reason="", readiness=false. Elapsed: 26.666868ms Feb 2 22:51:18.983: INFO: Pod "downwardapi-volume-b1053dee-d51e-4573-9442-c2fc6a7491b9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03248187s Feb 2 22:51:20.987: INFO: Pod "downwardapi-volume-b1053dee-d51e-4573-9442-c2fc6a7491b9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.036698798s STEP: Saw pod success Feb 2 22:51:20.987: INFO: Pod "downwardapi-volume-b1053dee-d51e-4573-9442-c2fc6a7491b9" satisfied condition "Succeeded or Failed" Feb 2 22:51:20.990: INFO: Trying to get logs from node leguer-worker pod downwardapi-volume-b1053dee-d51e-4573-9442-c2fc6a7491b9 container client-container: STEP: delete the pod Feb 2 22:51:21.070: INFO: Waiting for pod downwardapi-volume-b1053dee-d51e-4573-9442-c2fc6a7491b9 to disappear Feb 2 22:51:21.080: INFO: Pod downwardapi-volume-b1053dee-d51e-4573-9442-c2fc6a7491b9 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 2 22:51:21.080: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5682" for this suite. 
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":309,"completed":60,"skipped":1157,"failed":0} SSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 2 22:51:21.113: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating projection with secret that has name projected-secret-test-map-17988f30-637a-44d6-9a63-18b3f5a8301e STEP: Creating a pod to test consume secrets Feb 2 22:51:21.195: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-cfda86ee-0926-42ee-857f-2f697d510051" in namespace "projected-5235" to be "Succeeded or Failed" Feb 2 22:51:21.253: INFO: Pod "pod-projected-secrets-cfda86ee-0926-42ee-857f-2f697d510051": Phase="Pending", Reason="", readiness=false. Elapsed: 57.960315ms Feb 2 22:51:23.258: INFO: Pod "pod-projected-secrets-cfda86ee-0926-42ee-857f-2f697d510051": Phase="Pending", Reason="", readiness=false. Elapsed: 2.062493417s Feb 2 22:51:25.266: INFO: Pod "pod-projected-secrets-cfda86ee-0926-42ee-857f-2f697d510051": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.07105248s STEP: Saw pod success Feb 2 22:51:25.267: INFO: Pod "pod-projected-secrets-cfda86ee-0926-42ee-857f-2f697d510051" satisfied condition "Succeeded or Failed" Feb 2 22:51:25.269: INFO: Trying to get logs from node leguer-worker pod pod-projected-secrets-cfda86ee-0926-42ee-857f-2f697d510051 container projected-secret-volume-test: STEP: delete the pod Feb 2 22:51:25.297: INFO: Waiting for pod pod-projected-secrets-cfda86ee-0926-42ee-857f-2f697d510051 to disappear Feb 2 22:51:25.307: INFO: Pod pod-projected-secrets-cfda86ee-0926-42ee-857f-2f697d510051 no longer exists [AfterEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 2 22:51:25.308: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5235" for this suite. 
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":309,"completed":61,"skipped":1162,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 2 22:51:25.317: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Feb 2 22:51:26.179: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Feb 2 22:51:28.191: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63747903086, loc:(*time.Location)(0x7962e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63747903086, loc:(*time.Location)(0x7962e20)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63747903086, loc:(*time.Location)(0x7962e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63747903086, loc:(*time.Location)(0x7962e20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6bd9446d55\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Feb 2 22:51:31.219: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate pod and apply defaults after mutation [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Registering the mutating pod webhook via the AdmissionRegistration API STEP: create a pod that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 2 22:51:31.317: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-8784" for this suite. STEP: Destroying namespace "webhook-8784-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 • [SLOW TEST:6.155 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate pod and apply defaults after mutation [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":309,"completed":62,"skipped":1191,"failed":0} SSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 2 22:51:31.472: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:53 [It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating pod test-webserver-1ef31b20-4084-4dc0-bfb0-e05f52d7b61d in namespace container-probe-1973 Feb 2 22:51:37.635: INFO: Started pod test-webserver-1ef31b20-4084-4dc0-bfb0-e05f52d7b61d in namespace container-probe-1973 STEP: checking the pod's current state and verifying that restartCount is present Feb 2 22:51:37.639: INFO: Initial restart count of pod test-webserver-1ef31b20-4084-4dc0-bfb0-e05f52d7b61d is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 2 22:55:38.281: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-1973" for this suite. 
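The Probing-container case above watches a webserver pod with an HTTP GET /healthz liveness probe for roughly four minutes and verifies restartCount never leaves 0. A minimal sketch of that probe shape; the pod name, image (assumed to answer 200 on /healthz) and port are illustrative, and in the v1.20-era API used by this run the probe handler is the embedded Handler field (renamed ProbeHandler in later releases).

// liveness_httpget.go: a liveness probe that GETs /healthz; as long as the
// endpoint keeps returning success, the kubelet never restarts the container.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "test-webserver-demo"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "test-webserver",
				Image: "example.com/healthz-webserver:latest", // illustrative image assumed to serve /healthz
				Ports: []corev1.ContainerPort{{ContainerPort: 8080}},
				LivenessProbe: &corev1.Probe{
					Handler: corev1.Handler{
						HTTPGet: &corev1.HTTPGetAction{Path: "/healthz", Port: intstr.FromInt(8080)},
					},
					InitialDelaySeconds: 15,
					PeriodSeconds:       10,
					FailureThreshold:    3,
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}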
• [SLOW TEST:246.864 seconds] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":309,"completed":63,"skipped":1205,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPreemption [Serial] PreemptionExecutionPath runs ReplicaSets to verify preemption running path [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 2 22:55:38.337: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-preemption STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:90 Feb 2 22:55:38.745: INFO: Waiting up to 1m0s for all nodes to be ready Feb 2 22:56:38.788: INFO: Waiting for terminating namespaces to be deleted... [BeforeEach] PreemptionExecutionPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 2 22:56:38.790: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-preemption-path STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] PreemptionExecutionPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:488 STEP: Finding an available node STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. Feb 2 22:56:42.942: INFO: found a healthy node: leguer-worker [It] runs ReplicaSets to verify preemption running path [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 Feb 2 22:57:01.272: INFO: pods created so far: [1 1 1] Feb 2 22:57:01.272: INFO: length of pods created so far: 3 Feb 2 22:57:51.281: INFO: pods created so far: [2 2 1] [AfterEach] PreemptionExecutionPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 2 22:57:58.282: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-path-6375" for this suite. [AfterEach] PreemptionExecutionPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:462 [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 2 22:57:58.348: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-9163" for this suite. 
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:78 • [SLOW TEST:140.103 seconds] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 PreemptionExecutionPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:451 runs ReplicaSets to verify preemption running path [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] PreemptionExecutionPath runs ReplicaSets to verify preemption running path [Conformance]","total":309,"completed":64,"skipped":1248,"failed":0} SSSSS ------------------------------ [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 2 22:57:58.441: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:187 [It] should be submitted and removed [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: creating the pod STEP: setting up watch STEP: submitting the pod to kubernetes Feb 2 22:57:58.566: INFO: observed the pod list STEP: verifying the pod is in kubernetes STEP: verifying pod creation was observed STEP: deleting the pod gracefully STEP: verifying pod deletion was observed [AfterEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 2 22:58:50.233: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-5899" for this suite. 
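The Pods case above sets up a watch, submits a pod, deletes it gracefully, and then verifies the deletion was observed through the watch. A minimal client-go sketch of that flow, not the e2e test itself; the kubeconfig path, namespace, pod name and 30s grace period are assumptions.

// pod_watch_delete.go: watch a single pod by name, delete it with a grace
// period, and wait for the watch to report the Deleted event.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/watch"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)
	pods := client.CoreV1().Pods("default")

	// Watch only the pod we care about.
	w, err := pods.Watch(context.TODO(), metav1.ListOptions{
		FieldSelector: "metadata.name=pod-submit-remove-demo",
	})
	if err != nil {
		panic(err)
	}
	defer w.Stop()

	// Delete gracefully and wait for the Deleted event to arrive on the watch.
	grace := int64(30)
	if err := pods.Delete(context.TODO(), "pod-submit-remove-demo",
		metav1.DeleteOptions{GracePeriodSeconds: &grace}); err != nil {
		panic(err)
	}
	for ev := range w.ResultChan() {
		if ev.Type == watch.Deleted {
			fmt.Println("observed pod deletion")
			break
		}
	}
}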
• [SLOW TEST:51.819 seconds] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 should be submitted and removed [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]","total":309,"completed":65,"skipped":1253,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 2 22:58:50.267: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247 [BeforeEach] Kubectl logs /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1392 STEP: creating an pod Feb 2 22:58:50.315: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-3034 run logs-generator --image=k8s.gcr.io/e2e-test-images/agnhost:2.21 --restart=Never -- logs-generator --log-lines-total 100 --run-duration 20s' Feb 2 22:58:53.607: INFO: stderr: "" Feb 2 22:58:53.607: INFO: stdout: "pod/logs-generator created\n" [It] should be able to retrieve and filter logs [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Waiting for log generator to start. Feb 2 22:58:53.607: INFO: Waiting up to 5m0s for 1 pods to be running and ready, or succeeded: [logs-generator] Feb 2 22:58:53.607: INFO: Waiting up to 5m0s for pod "logs-generator" in namespace "kubectl-3034" to be "running and ready, or succeeded" Feb 2 22:58:53.689: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 81.959847ms Feb 2 22:58:55.692: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.085218455s Feb 2 22:58:57.697: INFO: Pod "logs-generator": Phase="Running", Reason="", readiness=true. Elapsed: 4.089803498s Feb 2 22:58:57.697: INFO: Pod "logs-generator" satisfied condition "running and ready, or succeeded" Feb 2 22:58:57.697: INFO: Wanted all 1 pods to be running and ready, or succeeded. Result: true. 
Pods: [logs-generator] STEP: checking for a matching strings Feb 2 22:58:57.697: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-3034 logs logs-generator logs-generator' Feb 2 22:58:57.835: INFO: stderr: "" Feb 2 22:58:57.835: INFO: stdout: "I0202 22:58:56.386909 1 logs_generator.go:76] 0 PUT /api/v1/namespaces/default/pods/4rbm 416\nI0202 22:58:56.587120 1 logs_generator.go:76] 1 GET /api/v1/namespaces/ns/pods/4tt5 360\nI0202 22:58:56.787064 1 logs_generator.go:76] 2 PUT /api/v1/namespaces/ns/pods/jbn 482\nI0202 22:58:56.987060 1 logs_generator.go:76] 3 POST /api/v1/namespaces/ns/pods/bvj 459\nI0202 22:58:57.187094 1 logs_generator.go:76] 4 GET /api/v1/namespaces/ns/pods/sl6 432\nI0202 22:58:57.387107 1 logs_generator.go:76] 5 PUT /api/v1/namespaces/kube-system/pods/8th 342\nI0202 22:58:57.587068 1 logs_generator.go:76] 6 GET /api/v1/namespaces/ns/pods/8ddg 569\nI0202 22:58:57.787041 1 logs_generator.go:76] 7 GET /api/v1/namespaces/ns/pods/cjf 523\n" STEP: limiting log lines Feb 2 22:58:57.835: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-3034 logs logs-generator logs-generator --tail=1' Feb 2 22:58:57.941: INFO: stderr: "" Feb 2 22:58:57.941: INFO: stdout: "I0202 22:58:57.787041 1 logs_generator.go:76] 7 GET /api/v1/namespaces/ns/pods/cjf 523\n" Feb 2 22:58:57.941: INFO: got output "I0202 22:58:57.787041 1 logs_generator.go:76] 7 GET /api/v1/namespaces/ns/pods/cjf 523\n" STEP: limiting log bytes Feb 2 22:58:57.941: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-3034 logs logs-generator logs-generator --limit-bytes=1' Feb 2 22:58:58.044: INFO: stderr: "" Feb 2 22:58:58.044: INFO: stdout: "I" Feb 2 22:58:58.044: INFO: got output "I" STEP: exposing timestamps Feb 2 22:58:58.044: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-3034 logs logs-generator logs-generator --tail=1 --timestamps' Feb 2 22:58:58.150: INFO: stderr: "" Feb 2 22:58:58.150: INFO: stdout: "2021-02-02T22:58:57.987311591Z I0202 22:58:57.987119 1 logs_generator.go:76] 8 GET /api/v1/namespaces/kube-system/pods/vvl 555\n" Feb 2 22:58:58.150: INFO: got output "2021-02-02T22:58:57.987311591Z I0202 22:58:57.987119 1 logs_generator.go:76] 8 GET /api/v1/namespaces/kube-system/pods/vvl 555\n" STEP: restricting to a time range Feb 2 22:59:00.650: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-3034 logs logs-generator logs-generator --since=1s' Feb 2 22:59:00.773: INFO: stderr: "" Feb 2 22:59:00.773: INFO: stdout: "I0202 22:58:59.787069 1 logs_generator.go:76] 17 PUT /api/v1/namespaces/default/pods/dfcm 211\nI0202 22:58:59.987092 1 logs_generator.go:76] 18 PUT /api/v1/namespaces/kube-system/pods/z5kg 249\nI0202 22:59:00.187072 1 logs_generator.go:76] 19 PUT /api/v1/namespaces/default/pods/bj42 500\nI0202 22:59:00.387098 1 logs_generator.go:76] 20 GET /api/v1/namespaces/kube-system/pods/js6r 436\nI0202 22:59:00.587088 1 logs_generator.go:76] 21 GET /api/v1/namespaces/default/pods/8lpd 381\n" Feb 2 22:59:00.773: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-3034 logs logs-generator logs-generator --since=24h' Feb 2 22:59:00.893: INFO: stderr: "" Feb 2 22:59:00.893: INFO: 
stdout: "I0202 22:58:56.386909 1 logs_generator.go:76] 0 PUT /api/v1/namespaces/default/pods/4rbm 416\nI0202 22:58:56.587120 1 logs_generator.go:76] 1 GET /api/v1/namespaces/ns/pods/4tt5 360\nI0202 22:58:56.787064 1 logs_generator.go:76] 2 PUT /api/v1/namespaces/ns/pods/jbn 482\nI0202 22:58:56.987060 1 logs_generator.go:76] 3 POST /api/v1/namespaces/ns/pods/bvj 459\nI0202 22:58:57.187094 1 logs_generator.go:76] 4 GET /api/v1/namespaces/ns/pods/sl6 432\nI0202 22:58:57.387107 1 logs_generator.go:76] 5 PUT /api/v1/namespaces/kube-system/pods/8th 342\nI0202 22:58:57.587068 1 logs_generator.go:76] 6 GET /api/v1/namespaces/ns/pods/8ddg 569\nI0202 22:58:57.787041 1 logs_generator.go:76] 7 GET /api/v1/namespaces/ns/pods/cjf 523\nI0202 22:58:57.987119 1 logs_generator.go:76] 8 GET /api/v1/namespaces/kube-system/pods/vvl 555\nI0202 22:58:58.187103 1 logs_generator.go:76] 9 PUT /api/v1/namespaces/default/pods/pc9c 525\nI0202 22:58:58.387082 1 logs_generator.go:76] 10 POST /api/v1/namespaces/ns/pods/hbcd 307\nI0202 22:58:58.587079 1 logs_generator.go:76] 11 PUT /api/v1/namespaces/kube-system/pods/8jsk 249\nI0202 22:58:58.787064 1 logs_generator.go:76] 12 GET /api/v1/namespaces/ns/pods/7sh 363\nI0202 22:58:58.987114 1 logs_generator.go:76] 13 PUT /api/v1/namespaces/ns/pods/5vxn 297\nI0202 22:58:59.187017 1 logs_generator.go:76] 14 POST /api/v1/namespaces/ns/pods/jtlv 486\nI0202 22:58:59.387081 1 logs_generator.go:76] 15 POST /api/v1/namespaces/kube-system/pods/dm6j 387\nI0202 22:58:59.587094 1 logs_generator.go:76] 16 PUT /api/v1/namespaces/ns/pods/fghf 584\nI0202 22:58:59.787069 1 logs_generator.go:76] 17 PUT /api/v1/namespaces/default/pods/dfcm 211\nI0202 22:58:59.987092 1 logs_generator.go:76] 18 PUT /api/v1/namespaces/kube-system/pods/z5kg 249\nI0202 22:59:00.187072 1 logs_generator.go:76] 19 PUT /api/v1/namespaces/default/pods/bj42 500\nI0202 22:59:00.387098 1 logs_generator.go:76] 20 GET /api/v1/namespaces/kube-system/pods/js6r 436\nI0202 22:59:00.587088 1 logs_generator.go:76] 21 GET /api/v1/namespaces/default/pods/8lpd 381\nI0202 22:59:00.787073 1 logs_generator.go:76] 22 POST /api/v1/namespaces/ns/pods/6z6 258\n" [AfterEach] Kubectl logs /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1397 Feb 2 22:59:00.894: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-3034 delete pod logs-generator' Feb 2 22:59:50.140: INFO: stderr: "" Feb 2 22:59:50.140: INFO: stdout: "pod \"logs-generator\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 2 22:59:50.140: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3034" for this suite. 
• [SLOW TEST:59.893 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl logs /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1389 should be able to retrieve and filter logs [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]","total":309,"completed":66,"skipped":1274,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 2 22:59:50.161: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Feb 2 22:59:50.850: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Feb 2 22:59:52.861: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63747903590, loc:(*time.Location)(0x7962e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63747903590, loc:(*time.Location)(0x7962e20)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63747903590, loc:(*time.Location)(0x7962e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63747903590, loc:(*time.Location)(0x7962e20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6bd9446d55\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Feb 2 22:59:55.896: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should honor timeout [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Setting timeout (1s) shorter than webhook latency (5s) STEP: Registering slow webhook via the AdmissionRegistration API STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s) STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is longer than webhook latency STEP: 
Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is empty (defaulted to 10s in v1) STEP: Registering slow webhook via the AdmissionRegistration API [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 2 23:00:08.096: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-8507" for this suite. STEP: Destroying namespace "webhook-8507-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 • [SLOW TEST:18.048 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should honor timeout [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":309,"completed":67,"skipped":1288,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 2 23:00:08.210: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a pod to test emptydir 0777 on tmpfs Feb 2 23:00:08.281: INFO: Waiting up to 5m0s for pod "pod-0c879da2-2fe7-4fa8-aea9-ec092458cfee" in namespace "emptydir-8987" to be "Succeeded or Failed" Feb 2 23:00:08.311: INFO: Pod "pod-0c879da2-2fe7-4fa8-aea9-ec092458cfee": Phase="Pending", Reason="", readiness=false. Elapsed: 30.411193ms Feb 2 23:00:10.322: INFO: Pod "pod-0c879da2-2fe7-4fa8-aea9-ec092458cfee": Phase="Pending", Reason="", readiness=false. Elapsed: 2.041297288s Feb 2 23:00:12.327: INFO: Pod "pod-0c879da2-2fe7-4fa8-aea9-ec092458cfee": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.046338882s STEP: Saw pod success Feb 2 23:00:12.327: INFO: Pod "pod-0c879da2-2fe7-4fa8-aea9-ec092458cfee" satisfied condition "Succeeded or Failed" Feb 2 23:00:12.331: INFO: Trying to get logs from node leguer-worker2 pod pod-0c879da2-2fe7-4fa8-aea9-ec092458cfee container test-container: STEP: delete the pod Feb 2 23:00:12.484: INFO: Waiting for pod pod-0c879da2-2fe7-4fa8-aea9-ec092458cfee to disappear Feb 2 23:00:12.520: INFO: Pod pod-0c879da2-2fe7-4fa8-aea9-ec092458cfee no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 2 23:00:12.520: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8987" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":309,"completed":68,"skipped":1307,"failed":0} SSS ------------------------------ [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 2 23:00:12.531: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41 [It] should update annotations on modification [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating the pod Feb 2 23:00:17.176: INFO: Successfully updated pod "annotationupdate74cf7add-93ff-4df8-b018-e11200a2f4d2" [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 2 23:00:21.210: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7848" for this suite. 
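The annotation-update case above projects the pod's annotations into a file via the downward API and checks that the file refreshes after the annotations are patched. A minimal sketch of the fieldRef involved; pod name, annotation, image and mount path are illustrative assumptions.

// projected_annotations.go: a projected downwardAPI item on metadata.annotations;
// the kubelet rewrites the projected file when the pod's annotations change.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:        "annotationupdate-demo",
			Annotations: map[string]string{"builder": "bar"},
		},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:         "client-container",
				Image:        "busybox",
				Command:      []string{"sh", "-c", "while true; do cat /etc/podinfo/annotations; sleep 5; done"},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							DownwardAPI: &corev1.DownwardAPIProjection{
								Items: []corev1.DownwardAPIVolumeFile{{
									Path:     "annotations",
									FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.annotations"},
								}},
							},
						}},
					},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}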
• [SLOW TEST:8.689 seconds] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:35 should update annotations on modification [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":309,"completed":69,"skipped":1310,"failed":0} S ------------------------------ [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 2 23:00:21.221: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247 [BeforeEach] Update Demo /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:299 [It] should create and stop a replication controller [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: creating a replication controller Feb 2 23:00:21.317: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-7105 create -f -' Feb 2 23:00:21.726: INFO: stderr: "" Feb 2 23:00:21.726: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Feb 2 23:00:21.726: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-7105 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Feb 2 23:00:21.849: INFO: stderr: "" Feb 2 23:00:21.849: INFO: stdout: "update-demo-nautilus-6xqvx update-demo-nautilus-9m8sw " Feb 2 23:00:21.849: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-7105 get pods update-demo-nautilus-6xqvx -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}}' Feb 2 23:00:21.940: INFO: stderr: "" Feb 2 23:00:21.940: INFO: stdout: "" Feb 2 23:00:21.940: INFO: update-demo-nautilus-6xqvx is created but not running Feb 2 23:00:26.941: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-7105 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Feb 2 23:00:27.049: INFO: stderr: "" Feb 2 23:00:27.049: INFO: stdout: "update-demo-nautilus-6xqvx update-demo-nautilus-9m8sw " Feb 2 23:00:27.049: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-7105 get pods update-demo-nautilus-6xqvx -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Feb 2 23:00:27.267: INFO: stderr: "" Feb 2 23:00:27.267: INFO: stdout: "true" Feb 2 23:00:27.267: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-7105 get pods update-demo-nautilus-6xqvx -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Feb 2 23:00:27.375: INFO: stderr: "" Feb 2 23:00:27.375: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Feb 2 23:00:27.375: INFO: validating pod update-demo-nautilus-6xqvx Feb 2 23:00:27.379: INFO: got data: { "image": "nautilus.jpg" } Feb 2 23:00:27.379: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Feb 2 23:00:27.379: INFO: update-demo-nautilus-6xqvx is verified up and running Feb 2 23:00:27.379: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-7105 get pods update-demo-nautilus-9m8sw -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Feb 2 23:00:27.490: INFO: stderr: "" Feb 2 23:00:27.490: INFO: stdout: "true" Feb 2 23:00:27.491: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-7105 get pods update-demo-nautilus-9m8sw -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Feb 2 23:00:27.590: INFO: stderr: "" Feb 2 23:00:27.591: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Feb 2 23:00:27.591: INFO: validating pod update-demo-nautilus-9m8sw Feb 2 23:00:27.595: INFO: got data: { "image": "nautilus.jpg" } Feb 2 23:00:27.595: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Feb 2 23:00:27.595: INFO: update-demo-nautilus-9m8sw is verified up and running STEP: using delete to clean up resources Feb 2 23:00:27.595: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-7105 delete --grace-period=0 --force -f -' Feb 2 23:00:27.718: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Feb 2 23:00:27.718: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Feb 2 23:00:27.718: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-7105 get rc,svc -l name=update-demo --no-headers' Feb 2 23:00:27.815: INFO: stderr: "No resources found in kubectl-7105 namespace.\n" Feb 2 23:00:27.815: INFO: stdout: "" Feb 2 23:00:27.815: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-7105 get pods -l name=update-demo -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Feb 2 23:00:28.096: INFO: stderr: "" Feb 2 23:00:28.096: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 2 23:00:28.096: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7105" for this suite. • [SLOW TEST:6.884 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:297 should create and stop a replication controller [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","total":309,"completed":70,"skipped":1311,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a volume subpath [sig-storage] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [k8s.io] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 2 23:00:28.106: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a volume subpath [sig-storage] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a pod to test substitution in volume subpath Feb 2 23:00:28.222: INFO: Waiting up to 5m0s for pod "var-expansion-2aad4c8b-e9bd-4c97-8d7b-a1a75afecb42" in namespace "var-expansion-845" to be "Succeeded or Failed" Feb 2 23:00:28.269: INFO: Pod "var-expansion-2aad4c8b-e9bd-4c97-8d7b-a1a75afecb42": Phase="Pending", Reason="", readiness=false. Elapsed: 47.457346ms Feb 2 23:00:30.277: INFO: Pod "var-expansion-2aad4c8b-e9bd-4c97-8d7b-a1a75afecb42": Phase="Pending", Reason="", readiness=false. Elapsed: 2.055438502s Feb 2 23:00:32.282: INFO: Pod "var-expansion-2aad4c8b-e9bd-4c97-8d7b-a1a75afecb42": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.059774516s STEP: Saw pod success Feb 2 23:00:32.282: INFO: Pod "var-expansion-2aad4c8b-e9bd-4c97-8d7b-a1a75afecb42" satisfied condition "Succeeded or Failed" Feb 2 23:00:32.284: INFO: Trying to get logs from node leguer-worker pod var-expansion-2aad4c8b-e9bd-4c97-8d7b-a1a75afecb42 container dapi-container: STEP: delete the pod Feb 2 23:00:32.331: INFO: Waiting for pod var-expansion-2aad4c8b-e9bd-4c97-8d7b-a1a75afecb42 to disappear Feb 2 23:00:32.337: INFO: Pod var-expansion-2aad4c8b-e9bd-4c97-8d7b-a1a75afecb42 no longer exists [AfterEach] [k8s.io] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 2 23:00:32.337: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-845" for this suite. •{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a volume subpath [sig-storage] [Conformance]","total":309,"completed":71,"skipped":1333,"failed":0} SSSSS ------------------------------ [sig-network] DNS should support configurable pod DNS nameservers [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 2 23:00:32.345: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should support configurable pod DNS nameservers [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a pod with dnsPolicy=None and customized dnsConfig... 
Feb 2 23:00:32.603: INFO: Created pod &Pod{ObjectMeta:{test-dns-nameservers dns-4790 7834d3e4-aa47-4c05-b604-fad036b3df32 4174761 0 2021-02-02 23:00:32 +0000 UTC map[] map[] [] [] [{e2e.test Update v1 2021-02-02 23:00:32 +0000 UTC FieldsV1 {"f:spec":{"f:containers":{"k:{\"name\":\"agnhost-container\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsConfig":{".":{},"f:nameservers":{},"f:searches":{}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-r66f6,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-r66f6,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost-container,Image:k8s.gcr.io/e2e-test-images/agnhost:2.21,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-r66f6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[1.1.1.1],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{},},ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHos
tnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Feb 2 23:00:32.606: INFO: The status of Pod test-dns-nameservers is Pending, waiting for it to be Running (with Ready = true) Feb 2 23:00:34.611: INFO: The status of Pod test-dns-nameservers is Pending, waiting for it to be Running (with Ready = true) Feb 2 23:00:36.611: INFO: The status of Pod test-dns-nameservers is Running (Ready = true) STEP: Verifying customized DNS suffix list is configured on pod... Feb 2 23:00:36.611: INFO: ExecWithOptions {Command:[/agnhost dns-suffix] Namespace:dns-4790 PodName:test-dns-nameservers ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Feb 2 23:00:36.611: INFO: >>> kubeConfig: /root/.kube/config I0202 23:00:36.651889 7 log.go:181] (0xc00015f6b0) (0xc00308c960) Create stream I0202 23:00:36.651973 7 log.go:181] (0xc00015f6b0) (0xc00308c960) Stream added, broadcasting: 1 I0202 23:00:36.654789 7 log.go:181] (0xc00015f6b0) Reply frame received for 1 I0202 23:00:36.654838 7 log.go:181] (0xc00015f6b0) (0xc00308ca00) Create stream I0202 23:00:36.654854 7 log.go:181] (0xc00015f6b0) (0xc00308ca00) Stream added, broadcasting: 3 I0202 23:00:36.655895 7 log.go:181] (0xc00015f6b0) Reply frame received for 3 I0202 23:00:36.655942 7 log.go:181] (0xc00015f6b0) (0xc00308caa0) Create stream I0202 23:00:36.655964 7 log.go:181] (0xc00015f6b0) (0xc00308caa0) Stream added, broadcasting: 5 I0202 23:00:36.657051 7 log.go:181] (0xc00015f6b0) Reply frame received for 5 I0202 23:00:36.753034 7 log.go:181] (0xc00015f6b0) Data frame received for 3 I0202 23:00:36.753079 7 log.go:181] (0xc00308ca00) (3) Data frame handling I0202 23:00:36.753111 7 log.go:181] (0xc00308ca00) (3) Data frame sent I0202 23:00:36.755303 7 log.go:181] (0xc00015f6b0) Data frame received for 5 I0202 23:00:36.755347 7 log.go:181] (0xc00308caa0) (5) Data frame handling I0202 23:00:36.755392 7 log.go:181] (0xc00015f6b0) Data frame received for 3 I0202 23:00:36.755414 7 log.go:181] (0xc00308ca00) (3) Data frame handling I0202 23:00:36.756889 7 log.go:181] (0xc00015f6b0) Data frame received for 1 I0202 23:00:36.756930 7 log.go:181] (0xc00308c960) (1) Data frame handling I0202 23:00:36.756955 7 log.go:181] (0xc00308c960) (1) Data frame sent I0202 23:00:36.757028 7 log.go:181] (0xc00015f6b0) (0xc00308c960) Stream removed, broadcasting: 1 I0202 23:00:36.757200 7 log.go:181] (0xc00015f6b0) (0xc00308c960) Stream removed, broadcasting: 1 I0202 23:00:36.757220 7 log.go:181] (0xc00015f6b0) (0xc00308ca00) Stream removed, broadcasting: 3 I0202 23:00:36.757231 7 log.go:181] (0xc00015f6b0) (0xc00308caa0) Stream removed, broadcasting: 5 STEP: Verifying customized DNS server is configured on pod... 
Feb 2 23:00:36.757: INFO: ExecWithOptions {Command:[/agnhost dns-server-list] Namespace:dns-4790 PodName:test-dns-nameservers ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Feb 2 23:00:36.757: INFO: >>> kubeConfig: /root/.kube/config I0202 23:00:36.757400 7 log.go:181] (0xc00015f6b0) Go away received I0202 23:00:36.788933 7 log.go:181] (0xc0008171e0) (0xc007330280) Create stream I0202 23:00:36.788989 7 log.go:181] (0xc0008171e0) (0xc007330280) Stream added, broadcasting: 1 I0202 23:00:36.794022 7 log.go:181] (0xc0008171e0) Reply frame received for 1 I0202 23:00:36.794065 7 log.go:181] (0xc0008171e0) (0xc00308cb40) Create stream I0202 23:00:36.794088 7 log.go:181] (0xc0008171e0) (0xc00308cb40) Stream added, broadcasting: 3 I0202 23:00:36.795527 7 log.go:181] (0xc0008171e0) Reply frame received for 3 I0202 23:00:36.795568 7 log.go:181] (0xc0008171e0) (0xc002efdcc0) Create stream I0202 23:00:36.795580 7 log.go:181] (0xc0008171e0) (0xc002efdcc0) Stream added, broadcasting: 5 I0202 23:00:36.796794 7 log.go:181] (0xc0008171e0) Reply frame received for 5 I0202 23:00:36.872015 7 log.go:181] (0xc0008171e0) Data frame received for 3 I0202 23:00:36.872044 7 log.go:181] (0xc00308cb40) (3) Data frame handling I0202 23:00:36.872061 7 log.go:181] (0xc00308cb40) (3) Data frame sent I0202 23:00:36.873349 7 log.go:181] (0xc0008171e0) Data frame received for 5 I0202 23:00:36.873375 7 log.go:181] (0xc002efdcc0) (5) Data frame handling I0202 23:00:36.873405 7 log.go:181] (0xc0008171e0) Data frame received for 3 I0202 23:00:36.873414 7 log.go:181] (0xc00308cb40) (3) Data frame handling I0202 23:00:36.875212 7 log.go:181] (0xc0008171e0) Data frame received for 1 I0202 23:00:36.875240 7 log.go:181] (0xc007330280) (1) Data frame handling I0202 23:00:36.875256 7 log.go:181] (0xc007330280) (1) Data frame sent I0202 23:00:36.875274 7 log.go:181] (0xc0008171e0) (0xc007330280) Stream removed, broadcasting: 1 I0202 23:00:36.875296 7 log.go:181] (0xc0008171e0) Go away received I0202 23:00:36.875476 7 log.go:181] (0xc0008171e0) (0xc007330280) Stream removed, broadcasting: 1 I0202 23:00:36.875522 7 log.go:181] (0xc0008171e0) (0xc00308cb40) Stream removed, broadcasting: 3 I0202 23:00:36.875548 7 log.go:181] (0xc0008171e0) (0xc002efdcc0) Stream removed, broadcasting: 5 Feb 2 23:00:36.875: INFO: Deleting pod test-dns-nameservers... [AfterEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 2 23:00:36.924: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-4790" for this suite. 
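For reference, the behaviour exercised above can be reproduced outside the e2e framework with a short manifest: dnsPolicy: None plus a dnsConfig carrying the same nameserver (1.1.1.1) and search domain (resolv.conf.local) seen in the pod dump. This is a minimal sketch, not the test's own code; the pod name and the busybox image are illustrative assumptions.

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: dns-config-demo
spec:
  dnsPolicy: None
  dnsConfig:
    nameservers:
      - 1.1.1.1
    searches:
      - resolv.conf.local
  containers:
    - name: main
      image: docker.io/library/busybox:1.29
      command: ["sleep", "3600"]
EOF
# Once the pod is Running, its resolver config should contain only the custom values:
kubectl exec dns-config-demo -- cat /etc/resolv.conf
kubectl delete pod dns-config-demo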
•{"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":309,"completed":72,"skipped":1338,"failed":0} SSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 2 23:00:36.950: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247 [It] should check if kubectl can dry-run update Pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: running the image docker.io/library/httpd:2.4.38-alpine Feb 2 23:00:37.070: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-7999 run e2e-test-httpd-pod --image=docker.io/library/httpd:2.4.38-alpine --labels=run=e2e-test-httpd-pod' Feb 2 23:00:37.180: INFO: stderr: "" Feb 2 23:00:37.180: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: replace the image in the pod with server-side dry-run Feb 2 23:00:37.180: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-7999 patch pod e2e-test-httpd-pod -p {"spec":{"containers":[{"name": "e2e-test-httpd-pod","image": "docker.io/library/busybox:1.29"}]}} --dry-run=server' Feb 2 23:00:37.670: INFO: stderr: "" Feb 2 23:00:37.670: INFO: stdout: "pod/e2e-test-httpd-pod patched\n" STEP: verifying the pod e2e-test-httpd-pod has the right image docker.io/library/httpd:2.4.38-alpine Feb 2 23:00:37.682: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-7999 delete pods e2e-test-httpd-pod' Feb 2 23:00:50.207: INFO: stderr: "" Feb 2 23:00:50.207: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 2 23:00:50.207: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7999" for this suite. 
• [SLOW TEST:13.266 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl server-side dry-run /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:909 should check if kubectl can dry-run update Pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]","total":309,"completed":73,"skipped":1349,"failed":0} [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 2 23:00:50.216: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating secret with name secret-test-fb4b83ef-775e-4178-adda-58627a895d92 STEP: Creating a pod to test consume secrets Feb 2 23:00:50.423: INFO: Waiting up to 5m0s for pod "pod-secrets-555e0ae3-2f13-4280-ad35-0249f6cef362" in namespace "secrets-8972" to be "Succeeded or Failed" Feb 2 23:00:50.449: INFO: Pod "pod-secrets-555e0ae3-2f13-4280-ad35-0249f6cef362": Phase="Pending", Reason="", readiness=false. Elapsed: 26.226494ms Feb 2 23:00:52.454: INFO: Pod "pod-secrets-555e0ae3-2f13-4280-ad35-0249f6cef362": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030998637s Feb 2 23:00:54.485: INFO: Pod "pod-secrets-555e0ae3-2f13-4280-ad35-0249f6cef362": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.062250954s STEP: Saw pod success Feb 2 23:00:54.485: INFO: Pod "pod-secrets-555e0ae3-2f13-4280-ad35-0249f6cef362" satisfied condition "Succeeded or Failed" Feb 2 23:00:54.488: INFO: Trying to get logs from node leguer-worker pod pod-secrets-555e0ae3-2f13-4280-ad35-0249f6cef362 container secret-volume-test: STEP: delete the pod Feb 2 23:00:54.530: INFO: Waiting for pod pod-secrets-555e0ae3-2f13-4280-ad35-0249f6cef362 to disappear Feb 2 23:00:54.541: INFO: Pod pod-secrets-555e0ae3-2f13-4280-ad35-0249f6cef362 no longer exists [AfterEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 2 23:00:54.541: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-8972" for this suite. STEP: Destroying namespace "secret-namespace-4820" for this suite. 
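The secrets test above checks that a volume mount resolves the secret in the pod's own namespace even when another namespace holds a secret with the same name. A hand-written sketch of that scenario, assuming illustrative namespace, secret, and key names:

kubectl create namespace demo-a
kubectl create namespace demo-b
kubectl -n demo-a create secret generic shared-name --from-literal=data=from-a
kubectl -n demo-b create secret generic shared-name --from-literal=data=from-b
kubectl -n demo-a apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: secret-mount-demo
spec:
  restartPolicy: Never
  containers:
    - name: main
      image: docker.io/library/busybox:1.29
      # Should print "from-a"; the same-named secret in demo-b is never visible here.
      command: ["cat", "/etc/secret-volume/data"]
      volumeMounts:
        - name: secret-volume
          mountPath: /etc/secret-volume
  volumes:
    - name: secret-volume
      secret:
        secretName: shared-name
EOF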
•{"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":309,"completed":74,"skipped":1349,"failed":0} SSS ------------------------------ [sig-api-machinery] Events should delete a collection of events [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] Events /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 2 23:00:54.554: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a collection of events [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Create set of events Feb 2 23:00:54.673: INFO: created test-event-1 Feb 2 23:00:54.679: INFO: created test-event-2 Feb 2 23:00:54.684: INFO: created test-event-3 STEP: get a list of Events with a label in the current namespace STEP: delete collection of events Feb 2 23:00:54.690: INFO: requesting DeleteCollection of events STEP: check that the list of events matches the requested quantity Feb 2 23:00:54.710: INFO: requesting list of events to confirm quantity [AfterEach] [sig-api-machinery] Events /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 2 23:00:54.714: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-3079" for this suite. •{"msg":"PASSED [sig-api-machinery] Events should delete a collection of events [Conformance]","total":309,"completed":75,"skipped":1352,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 2 23:00:54.742: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247 [BeforeEach] Kubectl run pod /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1520 [It] should create a pod from an image when restart is Never [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: running the image docker.io/library/httpd:2.4.38-alpine Feb 2 23:00:54.871: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-4132 run e2e-test-httpd-pod --restart=Never --image=docker.io/library/httpd:2.4.38-alpine' Feb 2 23:00:54.998: INFO: stderr: "" Feb 2 23:00:54.998: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod was created [AfterEach] Kubectl run pod 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1524 Feb 2 23:00:55.039: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-4132 delete pods e2e-test-httpd-pod' Feb 2 23:01:50.101: INFO: stderr: "" Feb 2 23:01:50.101: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 2 23:01:50.101: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4132" for this suite. • [SLOW TEST:55.366 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl run pod /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1517 should create a pod from an image when restart is Never [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance]","total":309,"completed":76,"skipped":1365,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [k8s.io] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 2 23:01:50.109: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Feb 2 23:01:54.324: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 2 23:01:54.589: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-3829" for this suite. 
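The container-runtime test above hinges on terminationMessagePath: the container writes "DONE" to a non-default path as a non-root user, and the kubelet copies that file into the container's terminated state. A rough equivalent follows; the UID, path, and image are assumptions, and it relies on the termination-log file being writable by a non-root UID, which is exactly what this conformance test asserts:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: termination-message-demo
spec:
  restartPolicy: Never
  containers:
    - name: main
      image: docker.io/library/busybox:1.29
      securityContext:
        runAsUser: 1000
      command: ["/bin/sh", "-c", "echo -n DONE > /dev/termination-custom-log"]
      terminationMessagePath: /dev/termination-custom-log
EOF
# After the pod succeeds, the written message appears in the container status:
kubectl get pod termination-message-demo \
  -o jsonpath='{.status.containerStatuses[0].state.terminated.message}'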
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":309,"completed":77,"skipped":1394,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 2 23:01:54.613: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating configMap with name projected-configmap-test-volume-map-cf764373-53d1-422e-8c8b-862c569492b7 STEP: Creating a pod to test consume configMaps Feb 2 23:01:54.739: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-ffa0f66d-ad0b-4241-a934-e1620b12f672" in namespace "projected-658" to be "Succeeded or Failed" Feb 2 23:01:54.778: INFO: Pod "pod-projected-configmaps-ffa0f66d-ad0b-4241-a934-e1620b12f672": Phase="Pending", Reason="", readiness=false. Elapsed: 38.983985ms Feb 2 23:01:56.827: INFO: Pod "pod-projected-configmaps-ffa0f66d-ad0b-4241-a934-e1620b12f672": Phase="Pending", Reason="", readiness=false. Elapsed: 2.087932855s Feb 2 23:01:58.831: INFO: Pod "pod-projected-configmaps-ffa0f66d-ad0b-4241-a934-e1620b12f672": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.092410749s STEP: Saw pod success Feb 2 23:01:58.832: INFO: Pod "pod-projected-configmaps-ffa0f66d-ad0b-4241-a934-e1620b12f672" satisfied condition "Succeeded or Failed" Feb 2 23:01:58.834: INFO: Trying to get logs from node leguer-worker pod pod-projected-configmaps-ffa0f66d-ad0b-4241-a934-e1620b12f672 container agnhost-container: STEP: delete the pod Feb 2 23:01:58.907: INFO: Waiting for pod pod-projected-configmaps-ffa0f66d-ad0b-4241-a934-e1620b12f672 to disappear Feb 2 23:01:58.911: INFO: Pod pod-projected-configmaps-ffa0f66d-ad0b-4241-a934-e1620b12f672 no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 2 23:01:58.911: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-658" for this suite. 
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":309,"completed":78,"skipped":1405,"failed":0} SSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 2 23:01:58.920: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a pod to test emptydir 0666 on tmpfs Feb 2 23:01:59.061: INFO: Waiting up to 5m0s for pod "pod-85b4a8d0-1765-47ab-a89d-bc94f577101a" in namespace "emptydir-7345" to be "Succeeded or Failed" Feb 2 23:01:59.089: INFO: Pod "pod-85b4a8d0-1765-47ab-a89d-bc94f577101a": Phase="Pending", Reason="", readiness=false. Elapsed: 28.22658ms Feb 2 23:02:01.094: INFO: Pod "pod-85b4a8d0-1765-47ab-a89d-bc94f577101a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033009963s Feb 2 23:02:03.098: INFO: Pod "pod-85b4a8d0-1765-47ab-a89d-bc94f577101a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.036518626s Feb 2 23:02:05.104: INFO: Pod "pod-85b4a8d0-1765-47ab-a89d-bc94f577101a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.042901268s STEP: Saw pod success Feb 2 23:02:05.104: INFO: Pod "pod-85b4a8d0-1765-47ab-a89d-bc94f577101a" satisfied condition "Succeeded or Failed" Feb 2 23:02:05.106: INFO: Trying to get logs from node leguer-worker pod pod-85b4a8d0-1765-47ab-a89d-bc94f577101a container test-container: STEP: delete the pod Feb 2 23:02:05.147: INFO: Waiting for pod pod-85b4a8d0-1765-47ab-a89d-bc94f577101a to disappear Feb 2 23:02:05.159: INFO: Pod pod-85b4a8d0-1765-47ab-a89d-bc94f577101a no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 2 23:02:05.159: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7345" for this suite. 
• [SLOW TEST:6.246 seconds] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:45 should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":309,"completed":79,"skipped":1408,"failed":0} SSS ------------------------------ [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes control plane services is included in cluster-info [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 2 23:02:05.166: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247 [It] should check if Kubernetes control plane services is included in cluster-info [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: validating cluster-info Feb 2 23:02:05.246: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-6908 cluster-info' Feb 2 23:02:05.335: INFO: stderr: "" Feb 2 23:02:05.335: INFO: stdout: "\x1b[0;32mKubernetes control plane\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:34747\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 2 23:02:05.335: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6908" for this suite. 
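The cluster-info check is the simplest of this batch and needs nothing beyond a reachable kubeconfig; the same two commands apply against any cluster:

kubectl cluster-info        # prints the control plane (and any addon) endpoints
kubectl cluster-info dump   # far more verbose; useful when the short form looks wrong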
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes control plane services is included in cluster-info [Conformance]","total":309,"completed":80,"skipped":1411,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 2 23:02:05.343: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] listing custom resource definition objects works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 Feb 2 23:02:05.396: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 2 23:02:11.864: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-1647" for this suite. • [SLOW TEST:6.531 seconds] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Simple CustomResourceDefinition /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:48 listing custom resource definition objects works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance]","total":309,"completed":81,"skipped":1418,"failed":0} SSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 2 23:02:11.874: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating secret with name secret-test-3d67234f-6ebb-4655-87c6-42452d1668a1 STEP: Creating a pod to test consume secrets Feb 2 23:02:11.976: INFO: Waiting up to 5m0s for pod "pod-secrets-b042eb3e-0034-4397-94cd-0cea13a4f13c" in namespace 
"secrets-3105" to be "Succeeded or Failed" Feb 2 23:02:11.979: INFO: Pod "pod-secrets-b042eb3e-0034-4397-94cd-0cea13a4f13c": Phase="Pending", Reason="", readiness=false. Elapsed: 3.574527ms Feb 2 23:02:14.102: INFO: Pod "pod-secrets-b042eb3e-0034-4397-94cd-0cea13a4f13c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.12660482s Feb 2 23:02:16.107: INFO: Pod "pod-secrets-b042eb3e-0034-4397-94cd-0cea13a4f13c": Phase="Running", Reason="", readiness=true. Elapsed: 4.131504917s Feb 2 23:02:18.115: INFO: Pod "pod-secrets-b042eb3e-0034-4397-94cd-0cea13a4f13c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.13946527s STEP: Saw pod success Feb 2 23:02:18.115: INFO: Pod "pod-secrets-b042eb3e-0034-4397-94cd-0cea13a4f13c" satisfied condition "Succeeded or Failed" Feb 2 23:02:18.119: INFO: Trying to get logs from node leguer-worker pod pod-secrets-b042eb3e-0034-4397-94cd-0cea13a4f13c container secret-volume-test: STEP: delete the pod Feb 2 23:02:18.148: INFO: Waiting for pod pod-secrets-b042eb3e-0034-4397-94cd-0cea13a4f13c to disappear Feb 2 23:02:18.159: INFO: Pod pod-secrets-b042eb3e-0034-4397-94cd-0cea13a4f13c no longer exists [AfterEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 2 23:02:18.159: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-3105" for this suite. • [SLOW TEST:6.292 seconds] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":309,"completed":82,"skipped":1423,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 2 23:02:18.166: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:745 [It] should be able to change the type from ExternalName to ClusterIP [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: creating a service externalname-service with the type=ExternalName in namespace services-5572 STEP: changing the ExternalName service to type=ClusterIP STEP: creating replication controller externalname-service in namespace services-5572 I0202 23:02:18.418242 7 runners.go:190] Created replication controller with name: externalname-service, namespace: services-5572, replica count: 2 I0202 23:02:21.468637 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 
terminating, 0 unknown, 0 runningButNotReady I0202 23:02:24.468925 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Feb 2 23:02:24.469: INFO: Creating new exec pod Feb 2 23:02:29.488: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=services-5572 exec execpod5g6fz -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' Feb 2 23:02:29.736: INFO: stderr: "I0202 23:02:29.638277 731 log.go:181] (0xc00003a420) (0xc0009381e0) Create stream\nI0202 23:02:29.638335 731 log.go:181] (0xc00003a420) (0xc0009381e0) Stream added, broadcasting: 1\nI0202 23:02:29.640039 731 log.go:181] (0xc00003a420) Reply frame received for 1\nI0202 23:02:29.640088 731 log.go:181] (0xc00003a420) (0xc0002fc320) Create stream\nI0202 23:02:29.640106 731 log.go:181] (0xc00003a420) (0xc0002fc320) Stream added, broadcasting: 3\nI0202 23:02:29.641196 731 log.go:181] (0xc00003a420) Reply frame received for 3\nI0202 23:02:29.641236 731 log.go:181] (0xc00003a420) (0xc00019f680) Create stream\nI0202 23:02:29.641246 731 log.go:181] (0xc00003a420) (0xc00019f680) Stream added, broadcasting: 5\nI0202 23:02:29.642048 731 log.go:181] (0xc00003a420) Reply frame received for 5\nI0202 23:02:29.727524 731 log.go:181] (0xc00003a420) Data frame received for 5\nI0202 23:02:29.727555 731 log.go:181] (0xc00019f680) (5) Data frame handling\nI0202 23:02:29.727570 731 log.go:181] (0xc00019f680) (5) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nI0202 23:02:29.728512 731 log.go:181] (0xc00003a420) Data frame received for 5\nI0202 23:02:29.728538 731 log.go:181] (0xc00019f680) (5) Data frame handling\nI0202 23:02:29.728562 731 log.go:181] (0xc00019f680) (5) Data frame sent\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0202 23:02:29.728804 731 log.go:181] (0xc00003a420) Data frame received for 5\nI0202 23:02:29.728958 731 log.go:181] (0xc00019f680) (5) Data frame handling\nI0202 23:02:29.729278 731 log.go:181] (0xc00003a420) Data frame received for 3\nI0202 23:02:29.729298 731 log.go:181] (0xc0002fc320) (3) Data frame handling\nI0202 23:02:29.730740 731 log.go:181] (0xc00003a420) Data frame received for 1\nI0202 23:02:29.730762 731 log.go:181] (0xc0009381e0) (1) Data frame handling\nI0202 23:02:29.730787 731 log.go:181] (0xc0009381e0) (1) Data frame sent\nI0202 23:02:29.730798 731 log.go:181] (0xc00003a420) (0xc0009381e0) Stream removed, broadcasting: 1\nI0202 23:02:29.730949 731 log.go:181] (0xc00003a420) Go away received\nI0202 23:02:29.731135 731 log.go:181] (0xc00003a420) (0xc0009381e0) Stream removed, broadcasting: 1\nI0202 23:02:29.731151 731 log.go:181] (0xc00003a420) (0xc0002fc320) Stream removed, broadcasting: 3\nI0202 23:02:29.731157 731 log.go:181] (0xc00003a420) (0xc00019f680) Stream removed, broadcasting: 5\n" Feb 2 23:02:29.737: INFO: stdout: "" Feb 2 23:02:29.737: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=services-5572 exec execpod5g6fz -- /bin/sh -x -c nc -zv -t -w 2 10.96.205.103 80' Feb 2 23:02:29.970: INFO: stderr: "I0202 23:02:29.890415 749 log.go:181] (0xc00003a420) (0xc000bce1e0) Create stream\nI0202 23:02:29.890481 749 log.go:181] (0xc00003a420) (0xc000bce1e0) Stream added, broadcasting: 1\nI0202 23:02:29.892324 749 log.go:181] (0xc00003a420) Reply frame received for 1\nI0202 23:02:29.892357 749 log.go:181] (0xc00003a420) (0xc000390820) Create 
stream\nI0202 23:02:29.892367 749 log.go:181] (0xc00003a420) (0xc000390820) Stream added, broadcasting: 3\nI0202 23:02:29.893383 749 log.go:181] (0xc00003a420) Reply frame received for 3\nI0202 23:02:29.893438 749 log.go:181] (0xc00003a420) (0xc000f02000) Create stream\nI0202 23:02:29.893454 749 log.go:181] (0xc00003a420) (0xc000f02000) Stream added, broadcasting: 5\nI0202 23:02:29.894403 749 log.go:181] (0xc00003a420) Reply frame received for 5\nI0202 23:02:29.961868 749 log.go:181] (0xc00003a420) Data frame received for 3\nI0202 23:02:29.961927 749 log.go:181] (0xc000390820) (3) Data frame handling\nI0202 23:02:29.961965 749 log.go:181] (0xc00003a420) Data frame received for 5\nI0202 23:02:29.962009 749 log.go:181] (0xc000f02000) (5) Data frame handling\nI0202 23:02:29.962039 749 log.go:181] (0xc000f02000) (5) Data frame sent\nI0202 23:02:29.962059 749 log.go:181] (0xc00003a420) Data frame received for 5\nI0202 23:02:29.962079 749 log.go:181] (0xc000f02000) (5) Data frame handling\n+ nc -zv -t -w 2 10.96.205.103 80\nConnection to 10.96.205.103 80 port [tcp/http] succeeded!\nI0202 23:02:29.963429 749 log.go:181] (0xc00003a420) Data frame received for 1\nI0202 23:02:29.963469 749 log.go:181] (0xc000bce1e0) (1) Data frame handling\nI0202 23:02:29.963491 749 log.go:181] (0xc000bce1e0) (1) Data frame sent\nI0202 23:02:29.963509 749 log.go:181] (0xc00003a420) (0xc000bce1e0) Stream removed, broadcasting: 1\nI0202 23:02:29.963595 749 log.go:181] (0xc00003a420) Go away received\nI0202 23:02:29.963995 749 log.go:181] (0xc00003a420) (0xc000bce1e0) Stream removed, broadcasting: 1\nI0202 23:02:29.964024 749 log.go:181] (0xc00003a420) (0xc000390820) Stream removed, broadcasting: 3\nI0202 23:02:29.964036 749 log.go:181] (0xc00003a420) (0xc000f02000) Stream removed, broadcasting: 5\n" Feb 2 23:02:29.970: INFO: stdout: "" Feb 2 23:02:29.970: INFO: Cleaning up the ExternalName to ClusterIP test service [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 2 23:02:29.989: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-5572" for this suite. 
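The Services test flips a Service from type ExternalName to ClusterIP and then probes it with nc from an exec pod, as the captured stderr shows. A hedged sketch of the same transition using kubectl apply; the selector and port values are assumptions, and the e2e framework performs the change as a single API update rather than a re-apply:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: externalname-service
spec:
  type: ExternalName
  externalName: example.com
EOF
# Re-apply the same name as a ClusterIP service backed by labelled pods; externalName
# is dropped and the API server allocates a cluster IP.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: externalname-service
spec:
  type: ClusterIP
  selector:
    name: externalname-service
  ports:
    - port: 80
      targetPort: 80
EOF
# From any pod in the namespace, the same probe the test runs:
nc -zv -t -w 2 externalname-service 80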
[AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 • [SLOW TEST:11.869 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ExternalName to ClusterIP [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":309,"completed":83,"skipped":1435,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 2 23:02:30.036: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Feb 2 23:02:30.747: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Feb 2 23:02:32.758: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63747903750, loc:(*time.Location)(0x7962e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63747903750, loc:(*time.Location)(0x7962e20)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63747903750, loc:(*time.Location)(0x7962e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63747903750, loc:(*time.Location)(0x7962e20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6bd9446d55\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 2 23:02:34.762: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63747903750, loc:(*time.Location)(0x7962e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63747903750, loc:(*time.Location)(0x7962e20)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, 
ext:63747903750, loc:(*time.Location)(0x7962e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63747903750, loc:(*time.Location)(0x7962e20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6bd9446d55\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Feb 2 23:02:37.815: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny custom resource creation, update and deletion [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 Feb 2 23:02:37.819: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the custom resource webhook via the AdmissionRegistration API STEP: Creating a custom resource that should be denied by the webhook STEP: Creating a custom resource whose deletion would be denied by the webhook STEP: Updating the custom resource with disallowed data should be denied STEP: Deleting the custom resource should be denied STEP: Remove the offending key and value from the custom resource data STEP: Deleting the updated custom resource should be successful [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 2 23:02:39.059: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-7374" for this suite. STEP: Destroying namespace "webhook-7374-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 • [SLOW TEST:9.197 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny custom resource creation, update and deletion [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":309,"completed":84,"skipped":1448,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 2 23:02:39.234: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54 [It] should surface a failure condition on a common issue like exceeded quota [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 Feb 2 23:02:39.363: INFO: Creating quota "condition-test" that 
allows only two pods to run in the current namespace STEP: Creating rc "condition-test" that asks for more than the allowed pod quota STEP: Checking rc "condition-test" has the desired failure condition set STEP: Scaling down rc "condition-test" to satisfy pod quota Feb 2 23:02:41.424: INFO: Updating replication controller "condition-test" STEP: Checking rc "condition-test" has no failure condition set [AfterEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 2 23:02:42.529: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-4932" for this suite. •{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":309,"completed":85,"skipped":1462,"failed":0} S ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 2 23:02:42.537: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Feb 2 23:02:43.969: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Feb 2 23:02:45.981: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63747903764, loc:(*time.Location)(0x7962e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63747903764, loc:(*time.Location)(0x7962e20)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63747903764, loc:(*time.Location)(0x7962e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63747903763, loc:(*time.Location)(0x7962e20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6bd9446d55\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 2 23:02:47.995: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63747903764, loc:(*time.Location)(0x7962e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63747903764, loc:(*time.Location)(0x7962e20)}}, Reason:"MinimumReplicasUnavailable", 
Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63747903764, loc:(*time.Location)(0x7962e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63747903763, loc:(*time.Location)(0x7962e20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6bd9446d55\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Feb 2 23:02:51.035: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 Feb 2 23:02:51.040: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-4104-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 2 23:02:52.149: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4466" for this suite. STEP: Destroying namespace "webhook-4466-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 • [SLOW TEST:9.744 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":309,"completed":86,"skipped":1463,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 2 23:02:52.282: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41 [It] should update labels on modification [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating the pod Feb 2 23:02:56.891: INFO: Successfully updated pod "labelsupdate81ba7897-f32a-4fe3-939c-57122dd25ca6" [AfterEach] [sig-storage] Downward API volume 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 2 23:03:00.942: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2080" for this suite. • [SLOW TEST:8.668 seconds] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:36 should update labels on modification [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":309,"completed":87,"skipped":1518,"failed":0} SSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 2 23:03:00.951: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating configMap with name projected-configmap-test-volume-a5c3d466-39fa-4f25-8c62-ed72f633e8fb STEP: Creating a pod to test consume configMaps Feb 2 23:03:01.055: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-58cce3f7-34d3-406d-b26a-5c80b67741f2" in namespace "projected-2274" to be "Succeeded or Failed" Feb 2 23:03:01.059: INFO: Pod "pod-projected-configmaps-58cce3f7-34d3-406d-b26a-5c80b67741f2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.712332ms Feb 2 23:03:03.063: INFO: Pod "pod-projected-configmaps-58cce3f7-34d3-406d-b26a-5c80b67741f2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008828146s Feb 2 23:03:05.069: INFO: Pod "pod-projected-configmaps-58cce3f7-34d3-406d-b26a-5c80b67741f2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014658918s STEP: Saw pod success Feb 2 23:03:05.069: INFO: Pod "pod-projected-configmaps-58cce3f7-34d3-406d-b26a-5c80b67741f2" satisfied condition "Succeeded or Failed" Feb 2 23:03:05.073: INFO: Trying to get logs from node leguer-worker2 pod pod-projected-configmaps-58cce3f7-34d3-406d-b26a-5c80b67741f2 container agnhost-container: STEP: delete the pod Feb 2 23:03:05.120: INFO: Waiting for pod pod-projected-configmaps-58cce3f7-34d3-406d-b26a-5c80b67741f2 to disappear Feb 2 23:03:05.131: INFO: Pod pod-projected-configmaps-58cce3f7-34d3-406d-b26a-5c80b67741f2 no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 2 23:03:05.131: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2274" for this suite. 
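The projected-ConfigMap case above only records pod phases, so the object shape it exercises is easier to see from a minimal client-go sketch: a pod whose projected volume pulls in a ConfigMap with an explicit defaultMode. The names below (namespace "default", ConfigMap "example-configmap", pod "projected-configmap-example", key "data-1", image "busybox", mode 0644) are illustrative assumptions, not the generated names or values this suite uses; only the kubeconfig path is taken from the log.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path as reported by this run (">>> kubeConfig: /root/.kube/config").
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	defaultMode := int32(0644) // 420 decimal; applied to every file projected into the volume

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "projected-configmap-example"}, // illustrative name
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "projected-vol",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						DefaultMode: &defaultMode,
						Sources: []corev1.VolumeProjection{{
							ConfigMap: &corev1.ConfigMapProjection{
								LocalObjectReference: corev1.LocalObjectReference{Name: "example-configmap"}, // assumed to already exist
								Items:                []corev1.KeyToPath{{Key: "data-1", Path: "data-1"}},
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:         "client-container",
				Image:        "busybox", // stand-in for the agnhost image named in the log
				Command:      []string{"sh", "-c", "ls -l /etc/projected && cat /etc/projected/data-1"},
				VolumeMounts: []corev1.VolumeMount{{Name: "projected-vol", MountPath: "/etc/projected"}},
			}},
		},
	}

	created, err := client.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("created pod", created.Name)
}

For reference, the pod dumps later in this section show the default service-account token volume with DefaultMode:*420, which is the same 0644 expressed in decimal; the conformance test presumably inspects the projected file's mode and content from inside the container, so the exact mode chosen here is only for illustration.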
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":309,"completed":88,"skipped":1523,"failed":0} SS ------------------------------ [sig-apps] Deployment deployment should support proportional scaling [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 2 23:03:05.139: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:85 [It] deployment should support proportional scaling [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 Feb 2 23:03:05.218: INFO: Creating deployment "webserver-deployment" Feb 2 23:03:05.234: INFO: Waiting for observed generation 1 Feb 2 23:03:07.282: INFO: Waiting for all required pods to come up Feb 2 23:03:07.288: INFO: Pod name httpd: Found 10 pods out of 10 STEP: ensuring each pod is running Feb 2 23:03:17.299: INFO: Waiting for deployment "webserver-deployment" to complete Feb 2 23:03:17.304: INFO: Updating deployment "webserver-deployment" with a non-existent image Feb 2 23:03:17.311: INFO: Updating deployment webserver-deployment Feb 2 23:03:17.311: INFO: Waiting for observed generation 2 Feb 2 23:03:19.371: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8 Feb 2 23:03:19.374: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8 Feb 2 23:03:19.407: INFO: Waiting for the first rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas Feb 2 23:03:19.647: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0 Feb 2 23:03:19.647: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5 Feb 2 23:03:19.649: INFO: Waiting for the second rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas Feb 2 23:03:19.653: INFO: Verifying that deployment "webserver-deployment" has minimum required number of available replicas Feb 2 23:03:19.653: INFO: Scaling up the deployment "webserver-deployment" from 10 to 30 Feb 2 23:03:19.660: INFO: Updating deployment webserver-deployment Feb 2 23:03:19.660: INFO: Waiting for the replicasets of deployment "webserver-deployment" to have desired number of replicas Feb 2 23:03:20.200: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20 Feb 2 23:03:22.735: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13 [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:79 Feb 2 23:03:22.936: INFO: Deployment "webserver-deployment": &Deployment{ObjectMeta:{webserver-deployment deployment-991 a5ec796c-2792-4631-a4d6-00bad5bb26ea 4176028 3 2021-02-02 23:03:05 +0000 UTC map[name:httpd] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2021-02-02 23:03:05 +0000 UTC FieldsV1 
{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2021-02-02 23:03:17 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:unavailableReplicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*30,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc00342b098 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:33,UpdatedReplicas:13,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2021-02-02 23:03:19 +0000 UTC,LastTransitionTime:2021-02-02 23:03:19 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "webserver-deployment-795d758f88" is progressing.,LastUpdateTime:2021-02-02 23:03:20 +0000 UTC,LastTransitionTime:2021-02-02 23:03:05 +0000 UTC,},},ReadyReplicas:8,CollisionCount:nil,},} Feb 2 23:03:23.326: INFO: New ReplicaSet "webserver-deployment-795d758f88" of Deployment "webserver-deployment": &ReplicaSet{ObjectMeta:{webserver-deployment-795d758f88 deployment-991 5a4fbc31-a83d-4d1f-a02b-4cd5518c0e17 4176016 3 2021-02-02 23:03:17 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment webserver-deployment a5ec796c-2792-4631-a4d6-00bad5bb26ea 
0xc00342b6a7 0xc00342b6a8}] [] [{kube-controller-manager Update apps/v1 2021-02-02 23:03:17 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a5ec796c-2792-4631-a4d6-00bad5bb26ea\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 795d758f88,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc00342b728 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Feb 2 23:03:23.326: INFO: All old ReplicaSets of Deployment "webserver-deployment": Feb 2 23:03:23.326: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-dd94f59b7 deployment-991 e01d96f8-a249-4ad5-94f6-7253d0aa3186 4176024 3 2021-02-02 23:03:05 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment a5ec796c-2792-4631-a4d6-00bad5bb26ea 0xc00342b787 0xc00342b788}] [] [{kube-controller-manager Update apps/v1 2021-02-02 23:03:10 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a5ec796c-2792-4631-a4d6-00bad5bb26ea\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*20,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: dd94f59b7,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc00342b7f8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},} Feb 2 23:03:23.522: INFO: Pod "webserver-deployment-795d758f88-4tgqw" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-4tgqw webserver-deployment-795d758f88- deployment-991 f86a4d7e-b5d0-47e6-8fb0-c0a9c0162204 4176055 0 2021-02-02 23:03:20 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 5a4fbc31-a83d-4d1f-a02b-4cd5518c0e17 0xc00342bc37 0xc00342bc38}] [] [{kube-controller-manager Update v1 2021-02-02 23:03:20 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5a4fbc31-a83d-4d1f-a02b-4cd5518c0e17\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-02-02 23:03:20 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-n9kxl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-n9kxl,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-n9kxl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-02 23:03:20 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2021-02-02 23:03:20 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-02 23:03:20 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-02 23:03:20 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:,StartTime:2021-02-02 23:03:20 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Feb 2 23:03:23.522: INFO: Pod "webserver-deployment-795d758f88-7hlz4" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-7hlz4 webserver-deployment-795d758f88- deployment-991 865a1a5c-d43c-46c5-90b6-c3312ac640ec 4175934 0 2021-02-02 23:03:17 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 5a4fbc31-a83d-4d1f-a02b-4cd5518c0e17 0xc00342bee7 0xc00342bee8}] [] [{kube-controller-manager Update v1 2021-02-02 23:03:17 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5a4fbc31-a83d-4d1f-a02b-4cd5518c0e17\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-02-02 23:03:17 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-n9kxl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-n9kxl,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-n9kxl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-02 23:03:17 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2021-02-02 23:03:17 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-02 23:03:17 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-02 23:03:17 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:,StartTime:2021-02-02 23:03:17 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Feb 2 23:03:23.523: INFO: Pod "webserver-deployment-795d758f88-c4fq5" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-c4fq5 webserver-deployment-795d758f88- deployment-991 f26f14ac-8df7-4756-b694-070918bf603b 4176031 0 2021-02-02 23:03:20 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 5a4fbc31-a83d-4d1f-a02b-4cd5518c0e17 0xc0024e4137 0xc0024e4138}] [] [{kube-controller-manager Update v1 2021-02-02 23:03:20 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5a4fbc31-a83d-4d1f-a02b-4cd5518c0e17\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-02-02 23:03:20 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-n9kxl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-n9kxl,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-n9kxl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-02 23:03:20 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2021-02-02 23:03:20 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-02 23:03:20 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-02 23:03:20 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:,StartTime:2021-02-02 23:03:20 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Feb 2 23:03:23.523: INFO: Pod "webserver-deployment-795d758f88-d79wq" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-d79wq webserver-deployment-795d758f88- deployment-991 5417223e-ced6-435f-8ccf-14c42faf0c1c 4176052 0 2021-02-02 23:03:20 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 5a4fbc31-a83d-4d1f-a02b-4cd5518c0e17 0xc0024e4457 0xc0024e4458}] [] [{kube-controller-manager Update v1 2021-02-02 23:03:20 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5a4fbc31-a83d-4d1f-a02b-4cd5518c0e17\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-02-02 23:03:20 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-n9kxl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-n9kxl,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-n9kxl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-02 23:03:20 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2021-02-02 23:03:20 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-02 23:03:20 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-02 23:03:20 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.12,PodIP:,StartTime:2021-02-02 23:03:20 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Feb 2 23:03:23.524: INFO: Pod "webserver-deployment-795d758f88-f5x5x" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-f5x5x webserver-deployment-795d758f88- deployment-991 14761259-d4b5-4df7-ac18-f9d7052f2f86 4175945 0 2021-02-02 23:03:17 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 5a4fbc31-a83d-4d1f-a02b-4cd5518c0e17 0xc0024e4877 0xc0024e4878}] [] [{kube-controller-manager Update v1 2021-02-02 23:03:17 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5a4fbc31-a83d-4d1f-a02b-4cd5518c0e17\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-02-02 23:03:17 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-n9kxl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-n9kxl,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-n9kxl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-02 23:03:17 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2021-02-02 23:03:17 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-02 23:03:17 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-02 23:03:17 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:,StartTime:2021-02-02 23:03:17 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Feb 2 23:03:23.525: INFO: Pod "webserver-deployment-795d758f88-kq5bq" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-kq5bq webserver-deployment-795d758f88- deployment-991 79bc7cc0-241b-4850-9ce4-e32357adaf5a 4176062 0 2021-02-02 23:03:20 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 5a4fbc31-a83d-4d1f-a02b-4cd5518c0e17 0xc0024e4b17 0xc0024e4b18}] [] [{kube-controller-manager Update v1 2021-02-02 23:03:20 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5a4fbc31-a83d-4d1f-a02b-4cd5518c0e17\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-02-02 23:03:21 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-n9kxl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-n9kxl,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-n9kxl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-02 23:03:20 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2021-02-02 23:03:20 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-02 23:03:20 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-02 23:03:20 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.12,PodIP:,StartTime:2021-02-02 23:03:20 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Feb 2 23:03:23.525: INFO: Pod "webserver-deployment-795d758f88-l26lc" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-l26lc webserver-deployment-795d758f88- deployment-991 e9b10164-653b-488b-990c-136ce462205b 4175947 0 2021-02-02 23:03:17 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 5a4fbc31-a83d-4d1f-a02b-4cd5518c0e17 0xc0024e4f27 0xc0024e4f28}] [] [{kube-controller-manager Update v1 2021-02-02 23:03:17 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5a4fbc31-a83d-4d1f-a02b-4cd5518c0e17\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-02-02 23:03:17 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-n9kxl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-n9kxl,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-n9kxl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-02 23:03:17 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2021-02-02 23:03:17 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-02 23:03:17 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-02 23:03:17 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:,StartTime:2021-02-02 23:03:17 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Feb 2 23:03:23.525: INFO: Pod "webserver-deployment-795d758f88-mlb5d" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-mlb5d webserver-deployment-795d758f88- deployment-991 a1ea6ca4-3236-4b8d-9a88-97adce514f2c 4176037 0 2021-02-02 23:03:20 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 5a4fbc31-a83d-4d1f-a02b-4cd5518c0e17 0xc0024e52a7 0xc0024e52a8}] [] [{kube-controller-manager Update v1 2021-02-02 23:03:20 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5a4fbc31-a83d-4d1f-a02b-4cd5518c0e17\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-02-02 23:03:20 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-n9kxl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-n9kxl,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-n9kxl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-02 23:03:20 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2021-02-02 23:03:20 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-02 23:03:20 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-02 23:03:20 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:,StartTime:2021-02-02 23:03:20 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Feb 2 23:03:23.525: INFO: Pod "webserver-deployment-795d758f88-pcsqz" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-pcsqz webserver-deployment-795d758f88- deployment-991 06fc2808-034f-4cf1-a974-7db4d4841818 4176013 0 2021-02-02 23:03:19 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 5a4fbc31-a83d-4d1f-a02b-4cd5518c0e17 0xc0024e5477 0xc0024e5478}] [] [{kube-controller-manager Update v1 2021-02-02 23:03:19 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5a4fbc31-a83d-4d1f-a02b-4cd5518c0e17\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-02-02 23:03:20 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-n9kxl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-n9kxl,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-n9kxl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-02 23:03:20 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2021-02-02 23:03:20 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-02 23:03:20 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-02 23:03:20 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.12,PodIP:,StartTime:2021-02-02 23:03:20 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Feb 2 23:03:23.526: INFO: Pod "webserver-deployment-795d758f88-qw66h" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-qw66h webserver-deployment-795d758f88- deployment-991 aee39c94-7c59-4e49-b081-b5f58a62133b 4175919 0 2021-02-02 23:03:17 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 5a4fbc31-a83d-4d1f-a02b-4cd5518c0e17 0xc0024e5657 0xc0024e5658}] [] [{kube-controller-manager Update v1 2021-02-02 23:03:17 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5a4fbc31-a83d-4d1f-a02b-4cd5518c0e17\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-02-02 23:03:17 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-n9kxl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-n9kxl,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-n9kxl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-02 23:03:17 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2021-02-02 23:03:17 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-02 23:03:17 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-02 23:03:17 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:,StartTime:2021-02-02 23:03:17 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Feb 2 23:03:23.526: INFO: Pod "webserver-deployment-795d758f88-v8vrk" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-v8vrk webserver-deployment-795d758f88- deployment-991 06b3252e-af11-4925-a766-7dcab82d4a4e 4176038 0 2021-02-02 23:03:20 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 5a4fbc31-a83d-4d1f-a02b-4cd5518c0e17 0xc0024e5817 0xc0024e5818}] [] [{kube-controller-manager Update v1 2021-02-02 23:03:20 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5a4fbc31-a83d-4d1f-a02b-4cd5518c0e17\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-02-02 23:03:20 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-n9kxl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-n9kxl,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-n9kxl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-02 23:03:20 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2021-02-02 23:03:20 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-02 23:03:20 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-02 23:03:20 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.12,PodIP:,StartTime:2021-02-02 23:03:20 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Feb 2 23:03:23.526: INFO: Pod "webserver-deployment-795d758f88-w2glf" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-w2glf webserver-deployment-795d758f88- deployment-991 4dbc372b-dad7-411f-831f-e96681ef26fe 4176089 0 2021-02-02 23:03:20 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 5a4fbc31-a83d-4d1f-a02b-4cd5518c0e17 0xc0024e5dd7 0xc0024e5dd8}] [] [{kube-controller-manager Update v1 2021-02-02 23:03:20 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5a4fbc31-a83d-4d1f-a02b-4cd5518c0e17\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-02-02 23:03:22 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-n9kxl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-n9kxl,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-n9kxl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-02 23:03:20 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2021-02-02 23:03:20 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-02 23:03:20 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-02 23:03:20 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:,StartTime:2021-02-02 23:03:20 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Feb 2 23:03:23.526: INFO: Pod "webserver-deployment-795d758f88-zpltc" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-zpltc webserver-deployment-795d758f88- deployment-991 5d66f027-bae2-49cd-8070-b12bf433fbb7 4176085 0 2021-02-02 23:03:17 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 5a4fbc31-a83d-4d1f-a02b-4cd5518c0e17 0xc001594077 0xc001594078}] [] [{kube-controller-manager Update v1 2021-02-02 23:03:17 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5a4fbc31-a83d-4d1f-a02b-4cd5518c0e17\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-02-02 23:03:22 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.187\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-n9kxl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-n9kxl,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-n9kxl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-02 23:03:17 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-02 23:03:17 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-02 23:03:17 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-02 23:03:17 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.12,PodIP:10.244.1.187,StartTime:2021-02-02 23:03:17 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.187,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Feb 2 23:03:23.526: INFO: Pod "webserver-deployment-dd94f59b7-47mbs" is available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-47mbs webserver-deployment-dd94f59b7- deployment-991 b5b1d2c4-f66e-4c5b-a380-c022a13be8ae 4175840 0 2021-02-02 23:03:05 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 e01d96f8-a249-4ad5-94f6-7253d0aa3186 0xc001594287 0xc001594288}] [] [{kube-controller-manager Update v1 2021-02-02 23:03:05 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e01d96f8-a249-4ad5-94f6-7253d0aa3186\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-02-02 23:03:12 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.234\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-n9kxl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-n9kxl,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-n9kxl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-02 23:03:05 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-02 23:03:11 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-02 23:03:11 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-02 23:03:05 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:10.244.2.234,StartTime:2021-02-02 23:03:05 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-02-02 23:03:10 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://d74938cca274f33854cec8d596ba94847a6f3d0156322bd6d346555dddcadce1,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.234,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Feb 2 23:03:23.527: INFO: Pod "webserver-deployment-dd94f59b7-78ptd" is available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-78ptd webserver-deployment-dd94f59b7- deployment-991 47fb6de7-aefd-42f3-8b22-4c2890636efd 4175865 0 2021-02-02 23:03:05 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 e01d96f8-a249-4ad5-94f6-7253d0aa3186 0xc001594447 0xc001594448}] [] [{kube-controller-manager Update v1 2021-02-02 23:03:05 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e01d96f8-a249-4ad5-94f6-7253d0aa3186\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-02-02 23:03:15 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.237\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-n9kxl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-n9kxl,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-n9kxl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-02 23:03:05 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-02 23:03:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-02 23:03:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-02 23:03:05 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:10.244.2.237,StartTime:2021-02-02 23:03:05 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-02-02 23:03:14 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://8286eec8cad3d80523e8b46784cc767c28ec395c3299de373ea6d0284180c554,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.237,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Feb 2 23:03:23.527: INFO: Pod "webserver-deployment-dd94f59b7-7j79r" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-7j79r webserver-deployment-dd94f59b7- deployment-991 f264abf7-4adb-4163-abd1-c5805a32ae40 4176070 0 2021-02-02 23:03:20 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 e01d96f8-a249-4ad5-94f6-7253d0aa3186 0xc0015945f7 0xc0015945f8}] [] [{kube-controller-manager Update v1 2021-02-02 23:03:20 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e01d96f8-a249-4ad5-94f6-7253d0aa3186\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-02-02 23:03:21 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-n9kxl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-n9kxl,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-n9kxl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-02 23:03:20 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-02 23:03:20 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-02 23:03:20 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-02 23:03:20 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.12,PodIP:,StartTime:2021-02-02 23:03:20 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Feb 2 23:03:23.527: INFO: Pod "webserver-deployment-dd94f59b7-7xrfc" is available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-7xrfc webserver-deployment-dd94f59b7- deployment-991 f3c6de9d-6309-45c2-b6bc-384cd00d563f 4175889 0 2021-02-02 23:03:05 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 e01d96f8-a249-4ad5-94f6-7253d0aa3186 0xc0015947b7 0xc0015947b8}] [] [{kube-controller-manager Update v1 2021-02-02 23:03:05 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e01d96f8-a249-4ad5-94f6-7253d0aa3186\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-02-02 23:03:16 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.186\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-n9kxl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-n9kxl,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-n9kxl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-02 23:03:05 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-02 23:03:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-02 23:03:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-02 23:03:05 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.12,PodIP:10.244.1.186,StartTime:2021-02-02 23:03:05 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-02-02 23:03:16 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://95b06220dcb0a48359014e8641c51108441c37593c6a9d178de6c14834419716,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.186,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Feb 2 23:03:23.527: INFO: Pod "webserver-deployment-dd94f59b7-9vwr8" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-9vwr8 webserver-deployment-dd94f59b7- deployment-991 9b9ff554-9f84-409c-b629-aa60b4f1158d 4176066 0 2021-02-02 23:03:19 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 e01d96f8-a249-4ad5-94f6-7253d0aa3186 0xc001594977 0xc001594978}] [] [{kube-controller-manager Update v1 2021-02-02 23:03:19 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e01d96f8-a249-4ad5-94f6-7253d0aa3186\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-02-02 23:03:21 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-n9kxl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-n9kxl,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-n9kxl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-02 23:03:20 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-02 23:03:20 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-02 23:03:20 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-02 23:03:20 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:,StartTime:2021-02-02 23:03:20 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Feb 2 23:03:23.527: INFO: Pod "webserver-deployment-dd94f59b7-cs5dk" is available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-cs5dk webserver-deployment-dd94f59b7- deployment-991 e14503fc-d99f-41c5-8f33-542128a813de 4175892 0 2021-02-02 23:03:05 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 e01d96f8-a249-4ad5-94f6-7253d0aa3186 0xc001594e47 0xc001594e48}] [] [{kube-controller-manager Update v1 2021-02-02 23:03:05 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e01d96f8-a249-4ad5-94f6-7253d0aa3186\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-02-02 23:03:16 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.185\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-n9kxl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-n9kxl,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-n9kxl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-02 23:03:05 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-02 23:03:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-02 23:03:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-02 23:03:05 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.12,PodIP:10.244.1.185,StartTime:2021-02-02 23:03:05 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-02-02 23:03:16 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://c94222e13da4775733647c706e410e3ab841eaaa560c1af919d0e00ce28bcfcb,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.185,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Feb 2 23:03:23.528: INFO: Pod "webserver-deployment-dd94f59b7-cxmjz" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-cxmjz webserver-deployment-dd94f59b7- deployment-991 8b6124fa-a6fd-4e6c-b4ec-dcb0281a69a5 4176076 0 2021-02-02 23:03:20 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 e01d96f8-a249-4ad5-94f6-7253d0aa3186 0xc0015951d7 0xc0015951d8}] [] [{kube-controller-manager Update v1 2021-02-02 23:03:20 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e01d96f8-a249-4ad5-94f6-7253d0aa3186\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-02-02 23:03:21 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-n9kxl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-n9kxl,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-n9kxl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-02 23:03:20 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-02 23:03:20 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-02 23:03:20 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-02 23:03:20 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.12,PodIP:,StartTime:2021-02-02 23:03:20 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Feb 2 23:03:23.528: INFO: Pod "webserver-deployment-dd94f59b7-fkmn2" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-fkmn2 webserver-deployment-dd94f59b7- deployment-991 37149f92-00a1-4dea-8146-4ccbd0bb7cb4 4176084 0 2021-02-02 23:03:20 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 e01d96f8-a249-4ad5-94f6-7253d0aa3186 0xc0015953b7 0xc0015953b8}] [] [{kube-controller-manager Update v1 2021-02-02 23:03:20 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e01d96f8-a249-4ad5-94f6-7253d0aa3186\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-02-02 23:03:22 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-n9kxl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-n9kxl,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-n9kxl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-02 23:03:20 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-02 23:03:20 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-02 23:03:20 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-02 23:03:20 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:,StartTime:2021-02-02 23:03:20 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Feb 2 23:03:23.528: INFO: Pod "webserver-deployment-dd94f59b7-gvn5d" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-gvn5d webserver-deployment-dd94f59b7- deployment-991 4c256fc0-ca3f-484b-bad0-846c1c88be02 4176022 0 2021-02-02 23:03:20 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 e01d96f8-a249-4ad5-94f6-7253d0aa3186 0xc001595557 0xc001595558}] [] [{kube-controller-manager Update v1 2021-02-02 23:03:20 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e01d96f8-a249-4ad5-94f6-7253d0aa3186\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-02-02 23:03:20 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-n9kxl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-n9kxl,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-n9kxl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-02 23:03:20 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-02 23:03:20 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-02 23:03:20 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-02 23:03:20 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.12,PodIP:,StartTime:2021-02-02 23:03:20 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Feb 2 23:03:23.528: INFO: Pod "webserver-deployment-dd94f59b7-jpmkh" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-jpmkh webserver-deployment-dd94f59b7- deployment-991 9e07528c-69af-4941-a3c8-4ba6fe0f8013 4176073 0 2021-02-02 23:03:20 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 e01d96f8-a249-4ad5-94f6-7253d0aa3186 0xc0015956f7 0xc0015956f8}] [] [{kube-controller-manager Update v1 2021-02-02 23:03:20 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e01d96f8-a249-4ad5-94f6-7253d0aa3186\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-02-02 23:03:21 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-n9kxl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-n9kxl,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-n9kxl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-02 23:03:20 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-02 23:03:20 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-02 23:03:20 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-02 23:03:20 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:,StartTime:2021-02-02 23:03:20 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Feb 2 23:03:23.528: INFO: Pod "webserver-deployment-dd94f59b7-jwbgk" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-jwbgk webserver-deployment-dd94f59b7- deployment-991 7d0f7678-b8ad-45d0-ab33-dace2cd8ed4f 4176020 0 2021-02-02 23:03:19 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 e01d96f8-a249-4ad5-94f6-7253d0aa3186 0xc0015958a7 0xc0015958a8}] [] [{kube-controller-manager Update v1 2021-02-02 23:03:19 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e01d96f8-a249-4ad5-94f6-7253d0aa3186\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-02-02 23:03:20 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-n9kxl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-n9kxl,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-n9kxl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-02 23:03:20 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-02 23:03:20 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-02 23:03:20 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-02 23:03:20 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:,StartTime:2021-02-02 23:03:20 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Feb 2 23:03:23.529: INFO: Pod "webserver-deployment-dd94f59b7-kfbzk" is available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-kfbzk webserver-deployment-dd94f59b7- deployment-991 836cb283-fafe-4319-9c12-1ad61ac01c12 4175869 0 2021-02-02 23:03:05 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 e01d96f8-a249-4ad5-94f6-7253d0aa3186 0xc001595a47 0xc001595a48}] [] [{kube-controller-manager Update v1 2021-02-02 23:03:05 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e01d96f8-a249-4ad5-94f6-7253d0aa3186\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-02-02 23:03:16 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.236\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-n9kxl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-n9kxl,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-n9kxl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-02 23:03:05 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-02 23:03:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-02 23:03:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-02 23:03:05 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:10.244.2.236,StartTime:2021-02-02 23:03:05 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-02-02 23:03:14 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://e3d44f2df551d4ea6da9a93c04bfc95bc30cd23896d1a8e630b538759ea2e36d,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.236,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Feb 2 23:03:23.529: INFO: Pod "webserver-deployment-dd94f59b7-pf297" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-pf297 webserver-deployment-dd94f59b7- deployment-991 283b22db-f426-46be-81ad-3003473d9b8e 4175996 0 2021-02-02 23:03:19 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 e01d96f8-a249-4ad5-94f6-7253d0aa3186 0xc001595c47 0xc001595c48}] [] [{kube-controller-manager Update v1 2021-02-02 23:03:19 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e01d96f8-a249-4ad5-94f6-7253d0aa3186\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-02-02 23:03:20 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-n9kxl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-n9kxl,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-n9kxl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-02 23:03:20 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-02 23:03:20 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-02 23:03:20 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-02 23:03:19 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:,StartTime:2021-02-02 23:03:20 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Feb 2 23:03:23.529: INFO: Pod "webserver-deployment-dd94f59b7-rlrkv" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-rlrkv webserver-deployment-dd94f59b7- deployment-991 89114f69-4862-41c3-9d5f-b2ce41ff7eb6 4176050 0 2021-02-02 23:03:20 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 e01d96f8-a249-4ad5-94f6-7253d0aa3186 0xc001595dd7 0xc001595dd8}] [] [{kube-controller-manager Update v1 2021-02-02 23:03:20 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e01d96f8-a249-4ad5-94f6-7253d0aa3186\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-02-02 23:03:20 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-n9kxl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-n9kxl,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-n9kxl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-02 23:03:20 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-02 23:03:20 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-02 23:03:20 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-02 23:03:20 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:,StartTime:2021-02-02 23:03:20 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Feb 2 23:03:23.529: INFO: Pod "webserver-deployment-dd94f59b7-tkxg7" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-tkxg7 webserver-deployment-dd94f59b7- deployment-991 6915f90b-b357-40ad-8616-1a063bd08ceb 4176046 0 2021-02-02 23:03:20 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 e01d96f8-a249-4ad5-94f6-7253d0aa3186 0xc001595f97 0xc001595f98}] [] [{kube-controller-manager Update v1 2021-02-02 23:03:20 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e01d96f8-a249-4ad5-94f6-7253d0aa3186\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-02-02 23:03:20 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-n9kxl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-n9kxl,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-n9kxl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-02 23:03:20 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-02 23:03:20 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-02 23:03:20 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-02 23:03:20 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.12,PodIP:,StartTime:2021-02-02 23:03:20 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Feb 2 23:03:23.529: INFO: Pod "webserver-deployment-dd94f59b7-v8vrl" is available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-v8vrl webserver-deployment-dd94f59b7- deployment-991 ace6ed64-933e-44d3-9090-15a2c2a858db 4175832 0 2021-02-02 23:03:05 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 e01d96f8-a249-4ad5-94f6-7253d0aa3186 0xc000716137 0xc000716138}] [] [{kube-controller-manager Update v1 2021-02-02 23:03:05 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e01d96f8-a249-4ad5-94f6-7253d0aa3186\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-02-02 23:03:10 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.182\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-n9kxl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-n9kxl,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-n9kxl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-02 23:03:05 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-02 23:03:10 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-02 23:03:10 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-02 23:03:05 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.12,PodIP:10.244.1.182,StartTime:2021-02-02 23:03:05 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-02-02 23:03:10 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://96959b04c4ae56a73997431278886868a3761d4acf8b7b0d7dc3b89abf563273,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.182,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Feb 2 23:03:23.529: INFO: Pod "webserver-deployment-dd94f59b7-xl29h" is available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-xl29h webserver-deployment-dd94f59b7- deployment-991 14efbda4-75ab-4433-b475-6fc609efeeb3 4175886 0 2021-02-02 23:03:05 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 e01d96f8-a249-4ad5-94f6-7253d0aa3186 0xc000716387 0xc000716388}] [] [{kube-controller-manager Update v1 2021-02-02 23:03:05 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e01d96f8-a249-4ad5-94f6-7253d0aa3186\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-02-02 23:03:16 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.184\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-n9kxl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-n9kxl,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-n9kxl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-02 23:03:05 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-02 23:03:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-02 23:03:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-02 23:03:05 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.12,PodIP:10.244.1.184,StartTime:2021-02-02 23:03:05 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-02-02 23:03:16 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://23e43a6961191224568b77b6612c26dbaec7cdfa630b4f39a6dee0cbee6f760a,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.184,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Feb 2 23:03:23.530: INFO: Pod "webserver-deployment-dd94f59b7-xq8gn" is available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-xq8gn webserver-deployment-dd94f59b7- deployment-991 23613dc4-6f2d-4fed-b29e-52e4df965b48 4175867 0 2021-02-02 23:03:05 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 e01d96f8-a249-4ad5-94f6-7253d0aa3186 0xc000716567 0xc000716568}] [] [{kube-controller-manager Update v1 2021-02-02 23:03:05 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e01d96f8-a249-4ad5-94f6-7253d0aa3186\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-02-02 23:03:15 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.183\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-n9kxl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-n9kxl,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-n9kxl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-02 23:03:05 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-02 23:03:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-02 23:03:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-02 23:03:05 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.12,PodIP:10.244.1.183,StartTime:2021-02-02 23:03:05 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-02-02 23:03:14 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://45ffcb653cd596005216d70847bafa41e6717d71e627f002ebab92877aa7f489,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.183,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Feb 2 23:03:23.530: INFO: Pod "webserver-deployment-dd94f59b7-zx2bl" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-zx2bl webserver-deployment-dd94f59b7- deployment-991 5497fa40-4176-4dd5-9e41-87dd0e582a6b 4176045 0 2021-02-02 23:03:20 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 e01d96f8-a249-4ad5-94f6-7253d0aa3186 0xc0007167e7 0xc0007167e8}] [] [{kube-controller-manager Update v1 2021-02-02 23:03:20 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e01d96f8-a249-4ad5-94f6-7253d0aa3186\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-02-02 23:03:20 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-n9kxl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-n9kxl,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-n9kxl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-02 23:03:20 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-02 23:03:20 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-02 23:03:20 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-02 23:03:20 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:,StartTime:2021-02-02 23:03:20 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Feb 2 23:03:23.530: INFO: Pod "webserver-deployment-dd94f59b7-zxtdt" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-zxtdt webserver-deployment-dd94f59b7- deployment-991 cdf25a0b-e394-4c94-898d-1b3e11b1d1c0 4176029 0 2021-02-02 23:03:20 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 e01d96f8-a249-4ad5-94f6-7253d0aa3186 0xc000716997 0xc000716998}] [] [{kube-controller-manager Update v1 2021-02-02 23:03:20 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e01d96f8-a249-4ad5-94f6-7253d0aa3186\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-02-02 23:03:20 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-n9kxl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-n9kxl,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-n9kxl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-02 23:03:20 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-02 23:03:20 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-02 23:03:20 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-02 23:03:20 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.12,PodIP:,StartTime:2021-02-02 23:03:20 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 2 23:03:23.530: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-991" for this suite. • [SLOW TEST:18.556 seconds] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support proportional scaling [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":309,"completed":89,"skipped":1525,"failed":0} [sig-network] DNS should provide DNS for pods for Subdomain [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 2 23:03:23.696: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Subdomain [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-5505.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-querier-2.dns-test-service-2.dns-5505.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-5505.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5505.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-5505.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service-2.dns-5505.svc.cluster.local;check="$$(dig +tcp +noall +answer +search 
dns-test-service-2.dns-5505.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service-2.dns-5505.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5505.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-5505.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-querier-2.dns-test-service-2.dns-5505.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-5505.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-querier-2.dns-test-service-2.dns-5505.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-5505.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service-2.dns-5505.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-5505.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service-2.dns-5505.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5505.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Feb 2 23:03:40.326: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-5505.svc.cluster.local from pod dns-5505/dns-test-b506743f-2ecc-49b8-8cef-ffc80476a509: the server could not find the requested resource (get pods dns-test-b506743f-2ecc-49b8-8cef-ffc80476a509) Feb 2 23:03:40.348: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5505.svc.cluster.local from pod dns-5505/dns-test-b506743f-2ecc-49b8-8cef-ffc80476a509: the server could not find the requested resource (get pods dns-test-b506743f-2ecc-49b8-8cef-ffc80476a509) Feb 2 23:03:40.361: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-5505.svc.cluster.local from pod dns-5505/dns-test-b506743f-2ecc-49b8-8cef-ffc80476a509: the server could not find the requested resource (get pods dns-test-b506743f-2ecc-49b8-8cef-ffc80476a509) Feb 2 23:03:40.366: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-5505.svc.cluster.local from pod dns-5505/dns-test-b506743f-2ecc-49b8-8cef-ffc80476a509: the server could not find the requested resource (get pods dns-test-b506743f-2ecc-49b8-8cef-ffc80476a509) Feb 2 23:03:40.529: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-5505.svc.cluster.local from pod dns-5505/dns-test-b506743f-2ecc-49b8-8cef-ffc80476a509: the server could not find the requested resource (get pods dns-test-b506743f-2ecc-49b8-8cef-ffc80476a509) Feb 2 23:03:40.922: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-5505.svc.cluster.local from pod dns-5505/dns-test-b506743f-2ecc-49b8-8cef-ffc80476a509: the server could not find the requested resource (get pods 
dns-test-b506743f-2ecc-49b8-8cef-ffc80476a509) Feb 2 23:03:40.966: INFO: Unable to read jessie_udp@dns-test-service-2.dns-5505.svc.cluster.local from pod dns-5505/dns-test-b506743f-2ecc-49b8-8cef-ffc80476a509: the server could not find the requested resource (get pods dns-test-b506743f-2ecc-49b8-8cef-ffc80476a509) Feb 2 23:03:41.604: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-5505.svc.cluster.local from pod dns-5505/dns-test-b506743f-2ecc-49b8-8cef-ffc80476a509: the server could not find the requested resource (get pods dns-test-b506743f-2ecc-49b8-8cef-ffc80476a509) Feb 2 23:03:42.288: INFO: Lookups using dns-5505/dns-test-b506743f-2ecc-49b8-8cef-ffc80476a509 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-5505.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5505.svc.cluster.local wheezy_udp@dns-test-service-2.dns-5505.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-5505.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-5505.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-5505.svc.cluster.local jessie_udp@dns-test-service-2.dns-5505.svc.cluster.local jessie_tcp@dns-test-service-2.dns-5505.svc.cluster.local] Feb 2 23:03:47.355: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-5505.svc.cluster.local from pod dns-5505/dns-test-b506743f-2ecc-49b8-8cef-ffc80476a509: the server could not find the requested resource (get pods dns-test-b506743f-2ecc-49b8-8cef-ffc80476a509) Feb 2 23:03:47.359: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5505.svc.cluster.local from pod dns-5505/dns-test-b506743f-2ecc-49b8-8cef-ffc80476a509: the server could not find the requested resource (get pods dns-test-b506743f-2ecc-49b8-8cef-ffc80476a509) Feb 2 23:03:47.362: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-5505.svc.cluster.local from pod dns-5505/dns-test-b506743f-2ecc-49b8-8cef-ffc80476a509: the server could not find the requested resource (get pods dns-test-b506743f-2ecc-49b8-8cef-ffc80476a509) Feb 2 23:03:47.365: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-5505.svc.cluster.local from pod dns-5505/dns-test-b506743f-2ecc-49b8-8cef-ffc80476a509: the server could not find the requested resource (get pods dns-test-b506743f-2ecc-49b8-8cef-ffc80476a509) Feb 2 23:03:47.373: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-5505.svc.cluster.local from pod dns-5505/dns-test-b506743f-2ecc-49b8-8cef-ffc80476a509: the server could not find the requested resource (get pods dns-test-b506743f-2ecc-49b8-8cef-ffc80476a509) Feb 2 23:03:47.376: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-5505.svc.cluster.local from pod dns-5505/dns-test-b506743f-2ecc-49b8-8cef-ffc80476a509: the server could not find the requested resource (get pods dns-test-b506743f-2ecc-49b8-8cef-ffc80476a509) Feb 2 23:03:47.379: INFO: Unable to read jessie_udp@dns-test-service-2.dns-5505.svc.cluster.local from pod dns-5505/dns-test-b506743f-2ecc-49b8-8cef-ffc80476a509: the server could not find the requested resource (get pods dns-test-b506743f-2ecc-49b8-8cef-ffc80476a509) Feb 2 23:03:47.381: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-5505.svc.cluster.local from pod dns-5505/dns-test-b506743f-2ecc-49b8-8cef-ffc80476a509: the server could not find the requested resource (get pods dns-test-b506743f-2ecc-49b8-8cef-ffc80476a509) Feb 2 23:03:47.387: INFO: Lookups using dns-5505/dns-test-b506743f-2ecc-49b8-8cef-ffc80476a509 failed for: 
[wheezy_udp@dns-querier-2.dns-test-service-2.dns-5505.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5505.svc.cluster.local wheezy_udp@dns-test-service-2.dns-5505.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-5505.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-5505.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-5505.svc.cluster.local jessie_udp@dns-test-service-2.dns-5505.svc.cluster.local jessie_tcp@dns-test-service-2.dns-5505.svc.cluster.local] Feb 2 23:03:52.293: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-5505.svc.cluster.local from pod dns-5505/dns-test-b506743f-2ecc-49b8-8cef-ffc80476a509: the server could not find the requested resource (get pods dns-test-b506743f-2ecc-49b8-8cef-ffc80476a509) Feb 2 23:03:52.297: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5505.svc.cluster.local from pod dns-5505/dns-test-b506743f-2ecc-49b8-8cef-ffc80476a509: the server could not find the requested resource (get pods dns-test-b506743f-2ecc-49b8-8cef-ffc80476a509) Feb 2 23:03:52.350: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-5505.svc.cluster.local from pod dns-5505/dns-test-b506743f-2ecc-49b8-8cef-ffc80476a509: the server could not find the requested resource (get pods dns-test-b506743f-2ecc-49b8-8cef-ffc80476a509) Feb 2 23:03:52.354: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-5505.svc.cluster.local from pod dns-5505/dns-test-b506743f-2ecc-49b8-8cef-ffc80476a509: the server could not find the requested resource (get pods dns-test-b506743f-2ecc-49b8-8cef-ffc80476a509) Feb 2 23:03:52.364: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-5505.svc.cluster.local from pod dns-5505/dns-test-b506743f-2ecc-49b8-8cef-ffc80476a509: the server could not find the requested resource (get pods dns-test-b506743f-2ecc-49b8-8cef-ffc80476a509) Feb 2 23:03:52.366: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-5505.svc.cluster.local from pod dns-5505/dns-test-b506743f-2ecc-49b8-8cef-ffc80476a509: the server could not find the requested resource (get pods dns-test-b506743f-2ecc-49b8-8cef-ffc80476a509) Feb 2 23:03:52.369: INFO: Unable to read jessie_udp@dns-test-service-2.dns-5505.svc.cluster.local from pod dns-5505/dns-test-b506743f-2ecc-49b8-8cef-ffc80476a509: the server could not find the requested resource (get pods dns-test-b506743f-2ecc-49b8-8cef-ffc80476a509) Feb 2 23:03:52.373: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-5505.svc.cluster.local from pod dns-5505/dns-test-b506743f-2ecc-49b8-8cef-ffc80476a509: the server could not find the requested resource (get pods dns-test-b506743f-2ecc-49b8-8cef-ffc80476a509) Feb 2 23:03:52.753: INFO: Lookups using dns-5505/dns-test-b506743f-2ecc-49b8-8cef-ffc80476a509 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-5505.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5505.svc.cluster.local wheezy_udp@dns-test-service-2.dns-5505.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-5505.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-5505.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-5505.svc.cluster.local jessie_udp@dns-test-service-2.dns-5505.svc.cluster.local jessie_tcp@dns-test-service-2.dns-5505.svc.cluster.local] Feb 2 23:03:57.293: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-5505.svc.cluster.local from pod dns-5505/dns-test-b506743f-2ecc-49b8-8cef-ffc80476a509: the server could not find the 
requested resource (get pods dns-test-b506743f-2ecc-49b8-8cef-ffc80476a509) Feb 2 23:03:57.297: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5505.svc.cluster.local from pod dns-5505/dns-test-b506743f-2ecc-49b8-8cef-ffc80476a509: the server could not find the requested resource (get pods dns-test-b506743f-2ecc-49b8-8cef-ffc80476a509) Feb 2 23:03:57.301: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-5505.svc.cluster.local from pod dns-5505/dns-test-b506743f-2ecc-49b8-8cef-ffc80476a509: the server could not find the requested resource (get pods dns-test-b506743f-2ecc-49b8-8cef-ffc80476a509) Feb 2 23:03:57.304: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-5505.svc.cluster.local from pod dns-5505/dns-test-b506743f-2ecc-49b8-8cef-ffc80476a509: the server could not find the requested resource (get pods dns-test-b506743f-2ecc-49b8-8cef-ffc80476a509) Feb 2 23:03:57.314: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-5505.svc.cluster.local from pod dns-5505/dns-test-b506743f-2ecc-49b8-8cef-ffc80476a509: the server could not find the requested resource (get pods dns-test-b506743f-2ecc-49b8-8cef-ffc80476a509) Feb 2 23:03:57.317: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-5505.svc.cluster.local from pod dns-5505/dns-test-b506743f-2ecc-49b8-8cef-ffc80476a509: the server could not find the requested resource (get pods dns-test-b506743f-2ecc-49b8-8cef-ffc80476a509) Feb 2 23:03:57.320: INFO: Unable to read jessie_udp@dns-test-service-2.dns-5505.svc.cluster.local from pod dns-5505/dns-test-b506743f-2ecc-49b8-8cef-ffc80476a509: the server could not find the requested resource (get pods dns-test-b506743f-2ecc-49b8-8cef-ffc80476a509) Feb 2 23:03:57.324: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-5505.svc.cluster.local from pod dns-5505/dns-test-b506743f-2ecc-49b8-8cef-ffc80476a509: the server could not find the requested resource (get pods dns-test-b506743f-2ecc-49b8-8cef-ffc80476a509) Feb 2 23:03:57.329: INFO: Lookups using dns-5505/dns-test-b506743f-2ecc-49b8-8cef-ffc80476a509 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-5505.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5505.svc.cluster.local wheezy_udp@dns-test-service-2.dns-5505.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-5505.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-5505.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-5505.svc.cluster.local jessie_udp@dns-test-service-2.dns-5505.svc.cluster.local jessie_tcp@dns-test-service-2.dns-5505.svc.cluster.local] Feb 2 23:04:02.292: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-5505.svc.cluster.local from pod dns-5505/dns-test-b506743f-2ecc-49b8-8cef-ffc80476a509: the server could not find the requested resource (get pods dns-test-b506743f-2ecc-49b8-8cef-ffc80476a509) Feb 2 23:04:02.296: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5505.svc.cluster.local from pod dns-5505/dns-test-b506743f-2ecc-49b8-8cef-ffc80476a509: the server could not find the requested resource (get pods dns-test-b506743f-2ecc-49b8-8cef-ffc80476a509) Feb 2 23:04:02.300: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-5505.svc.cluster.local from pod dns-5505/dns-test-b506743f-2ecc-49b8-8cef-ffc80476a509: the server could not find the requested resource (get pods dns-test-b506743f-2ecc-49b8-8cef-ffc80476a509) Feb 2 23:04:02.303: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-5505.svc.cluster.local 
from pod dns-5505/dns-test-b506743f-2ecc-49b8-8cef-ffc80476a509: the server could not find the requested resource (get pods dns-test-b506743f-2ecc-49b8-8cef-ffc80476a509) Feb 2 23:04:02.311: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-5505.svc.cluster.local from pod dns-5505/dns-test-b506743f-2ecc-49b8-8cef-ffc80476a509: the server could not find the requested resource (get pods dns-test-b506743f-2ecc-49b8-8cef-ffc80476a509) Feb 2 23:04:02.314: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-5505.svc.cluster.local from pod dns-5505/dns-test-b506743f-2ecc-49b8-8cef-ffc80476a509: the server could not find the requested resource (get pods dns-test-b506743f-2ecc-49b8-8cef-ffc80476a509) Feb 2 23:04:02.317: INFO: Unable to read jessie_udp@dns-test-service-2.dns-5505.svc.cluster.local from pod dns-5505/dns-test-b506743f-2ecc-49b8-8cef-ffc80476a509: the server could not find the requested resource (get pods dns-test-b506743f-2ecc-49b8-8cef-ffc80476a509) Feb 2 23:04:02.320: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-5505.svc.cluster.local from pod dns-5505/dns-test-b506743f-2ecc-49b8-8cef-ffc80476a509: the server could not find the requested resource (get pods dns-test-b506743f-2ecc-49b8-8cef-ffc80476a509) Feb 2 23:04:02.326: INFO: Lookups using dns-5505/dns-test-b506743f-2ecc-49b8-8cef-ffc80476a509 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-5505.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5505.svc.cluster.local wheezy_udp@dns-test-service-2.dns-5505.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-5505.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-5505.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-5505.svc.cluster.local jessie_udp@dns-test-service-2.dns-5505.svc.cluster.local jessie_tcp@dns-test-service-2.dns-5505.svc.cluster.local] Feb 2 23:04:07.293: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-5505.svc.cluster.local from pod dns-5505/dns-test-b506743f-2ecc-49b8-8cef-ffc80476a509: the server could not find the requested resource (get pods dns-test-b506743f-2ecc-49b8-8cef-ffc80476a509) Feb 2 23:04:07.297: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5505.svc.cluster.local from pod dns-5505/dns-test-b506743f-2ecc-49b8-8cef-ffc80476a509: the server could not find the requested resource (get pods dns-test-b506743f-2ecc-49b8-8cef-ffc80476a509) Feb 2 23:04:07.301: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-5505.svc.cluster.local from pod dns-5505/dns-test-b506743f-2ecc-49b8-8cef-ffc80476a509: the server could not find the requested resource (get pods dns-test-b506743f-2ecc-49b8-8cef-ffc80476a509) Feb 2 23:04:07.304: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-5505.svc.cluster.local from pod dns-5505/dns-test-b506743f-2ecc-49b8-8cef-ffc80476a509: the server could not find the requested resource (get pods dns-test-b506743f-2ecc-49b8-8cef-ffc80476a509) Feb 2 23:04:07.314: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-5505.svc.cluster.local from pod dns-5505/dns-test-b506743f-2ecc-49b8-8cef-ffc80476a509: the server could not find the requested resource (get pods dns-test-b506743f-2ecc-49b8-8cef-ffc80476a509) Feb 2 23:04:07.317: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-5505.svc.cluster.local from pod dns-5505/dns-test-b506743f-2ecc-49b8-8cef-ffc80476a509: the server could not find the requested resource (get pods 
dns-test-b506743f-2ecc-49b8-8cef-ffc80476a509) Feb 2 23:04:07.321: INFO: Unable to read jessie_udp@dns-test-service-2.dns-5505.svc.cluster.local from pod dns-5505/dns-test-b506743f-2ecc-49b8-8cef-ffc80476a509: the server could not find the requested resource (get pods dns-test-b506743f-2ecc-49b8-8cef-ffc80476a509) Feb 2 23:04:07.325: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-5505.svc.cluster.local from pod dns-5505/dns-test-b506743f-2ecc-49b8-8cef-ffc80476a509: the server could not find the requested resource (get pods dns-test-b506743f-2ecc-49b8-8cef-ffc80476a509) Feb 2 23:04:07.331: INFO: Lookups using dns-5505/dns-test-b506743f-2ecc-49b8-8cef-ffc80476a509 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-5505.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5505.svc.cluster.local wheezy_udp@dns-test-service-2.dns-5505.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-5505.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-5505.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-5505.svc.cluster.local jessie_udp@dns-test-service-2.dns-5505.svc.cluster.local jessie_tcp@dns-test-service-2.dns-5505.svc.cluster.local] Feb 2 23:04:12.328: INFO: DNS probes using dns-5505/dns-test-b506743f-2ecc-49b8-8cef-ffc80476a509 succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 2 23:04:12.967: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-5505" for this suite. • [SLOW TEST:49.349 seconds] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for pods for Subdomain [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":309,"completed":90,"skipped":1525,"failed":0} SS ------------------------------ [sig-network] Services should serve multiport endpoints from pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 2 23:04:13.046: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:745 [It] should serve multiport endpoints from pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: creating service multi-endpoint-test in namespace services-7752 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-7752 to expose endpoints map[] Feb 2 23:04:13.228: INFO: Failed go get Endpoints object: endpoints "multi-endpoint-test" not found Feb 2 23:04:14.258: INFO: successfully validated that service multi-endpoint-test in namespace services-7752 exposes endpoints map[] STEP: Creating pod pod1 in namespace services-7752 STEP: 
waiting up to 3m0s for service multi-endpoint-test in namespace services-7752 to expose endpoints map[pod1:[100]] Feb 2 23:04:18.410: INFO: successfully validated that service multi-endpoint-test in namespace services-7752 exposes endpoints map[pod1:[100]] STEP: Creating pod pod2 in namespace services-7752 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-7752 to expose endpoints map[pod1:[100] pod2:[101]] Feb 2 23:04:21.468: INFO: successfully validated that service multi-endpoint-test in namespace services-7752 exposes endpoints map[pod1:[100] pod2:[101]] STEP: Deleting pod pod1 in namespace services-7752 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-7752 to expose endpoints map[pod2:[101]] Feb 2 23:04:21.541: INFO: successfully validated that service multi-endpoint-test in namespace services-7752 exposes endpoints map[pod2:[101]] STEP: Deleting pod pod2 in namespace services-7752 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-7752 to expose endpoints map[] Feb 2 23:04:21.998: INFO: successfully validated that service multi-endpoint-test in namespace services-7752 exposes endpoints map[] [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 2 23:04:22.082: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-7752" for this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 • [SLOW TEST:9.202 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve multiport endpoints from pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods [Conformance]","total":309,"completed":91,"skipped":1527,"failed":0} SSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 2 23:04:22.248: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a pod to test emptydir 0777 on tmpfs Feb 2 23:04:22.939: INFO: Waiting up to 5m0s for pod "pod-984422cd-e36d-437e-8054-676e11b83f2e" in namespace "emptydir-3023" to be "Succeeded or Failed" Feb 2 23:04:23.067: INFO: Pod "pod-984422cd-e36d-437e-8054-676e11b83f2e": Phase="Pending", Reason="", readiness=false. Elapsed: 128.295894ms Feb 2 23:04:25.071: INFO: Pod "pod-984422cd-e36d-437e-8054-676e11b83f2e": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.132665823s Feb 2 23:04:27.076: INFO: Pod "pod-984422cd-e36d-437e-8054-676e11b83f2e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.136856336s STEP: Saw pod success Feb 2 23:04:27.076: INFO: Pod "pod-984422cd-e36d-437e-8054-676e11b83f2e" satisfied condition "Succeeded or Failed" Feb 2 23:04:27.078: INFO: Trying to get logs from node leguer-worker pod pod-984422cd-e36d-437e-8054-676e11b83f2e container test-container: STEP: delete the pod Feb 2 23:04:27.238: INFO: Waiting for pod pod-984422cd-e36d-437e-8054-676e11b83f2e to disappear Feb 2 23:04:27.255: INFO: Pod pod-984422cd-e36d-437e-8054-676e11b83f2e no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 2 23:04:27.255: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3023" for this suite. • [SLOW TEST:5.073 seconds] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:45 should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":309,"completed":92,"skipped":1533,"failed":0} SSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 2 23:04:27.321: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:92 Feb 2 23:04:27.443: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Feb 2 23:04:27.456: INFO: Waiting for terminating namespaces to be deleted... 
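For reference, the names probed in the dns-5505 test above follow the pod-record pattern <hostname>.<subdomain>.<namespace>.svc.cluster.local, which is produced by a headless Service whose name matches the pod's spec.subdomain. A minimal sketch of that shape, reusing the names from the log; the selector label "app: dns-test", the busybox image, the sleep command and the service port are stand-ins and are not taken from the test itself:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: dns-test-service-2        # headless Service; name matches the pod's subdomain
  namespace: dns-5505
spec:
  clusterIP: None                 # headless: DNS returns the backing pod IPs directly
  selector:
    app: dns-test                 # hypothetical label, must match the pod below
  ports:
  - name: http
    port: 80
EOF

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: dns-querier-2
  namespace: dns-5505
  labels:
    app: dns-test
spec:
  hostname: dns-querier-2         # DNS label for the pod itself
  subdomain: dns-test-service-2   # must equal the headless Service name
  containers:
  - name: querier
    image: busybox                # stand-in; the conformance test uses its own probe images
    command: ["sleep", "3600"]
EOF

# Once the pod is running, both names seen in the lookup lines above should resolve:
kubectl exec -n dns-5505 dns-querier-2 -- \
  nslookup dns-querier-2.dns-test-service-2.dns-5505.svc.cluster.local
kubectl exec -n dns-5505 dns-querier-2 -- \
  nslookup dns-test-service-2.dns-5505.svc.cluster.local

The conformance test itself issues these lookups over both UDP and TCP from two resolver containers (the wheezy and jessie entries in the log); the sketch only shows the API objects that make the names resolvable.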
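The SchedulerPredicates spec that follows creates three pods which all request hostPort 54321 on the same node and expects every one of them to schedule, because the port tuples differ in hostIP or protocol: (127.0.0.1, TCP), (172.18.0.13, TCP) and (172.18.0.13, UDP). A rough shell sketch of equivalent pod specs and of the connectivity probes the test runs from the e2e-host-exec pod; the busybox image, sleep command and containerPort 8080 are placeholders (the real test pods answer /hostname over HTTP on the host port), while the namespace, node label, addresses and probe commands are taken from the log:

for spec in "pod1 127.0.0.1 TCP" "pod2 172.18.0.13 TCP" "pod3 172.18.0.13 UDP"; do
  set -- $spec
  cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: $1
  namespace: sched-pred-3805
spec:
  nodeSelector:                   # pin all three pods to the node labeled by the test
    kubernetes.io/e2e-67723f37-7b58-4263-a89d-e3f360d934ac: "90"
  containers:
  - name: server
    image: busybox                # stand-in for the test's HTTP server image
    command: ["sleep", "3600"]
    ports:
    - containerPort: 8080
      hostPort: 54321
      hostIP: $2
      protocol: $3
EOF
done

# Connectivity probes equivalent to the ExecWithOptions commands in the log,
# run inside the host-network pod e2e-host-exec:
kubectl exec -n sched-pred-3805 e2e-host-exec -- \
  /bin/sh -c 'curl -g --connect-timeout 5 --interface 172.18.0.13 http://127.0.0.1:54321/hostname'
kubectl exec -n sched-pred-3805 e2e-host-exec -- \
  /bin/sh -c 'curl -g --connect-timeout 5 http://172.18.0.13:54321/hostname'
kubectl exec -n sched-pred-3805 e2e-host-exec -- \
  /bin/sh -c 'nc -vuz -w 5 172.18.0.13 54321'

If any two of the specs shared the full (hostIP, protocol, hostPort) tuple, the later pod would be expected to stay Pending with a host-port conflict rather than schedule, which is exactly the predicate this spec verifies does not fire here.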
Feb 2 23:04:27.461: INFO: Logging pods the apiserver thinks is on node leguer-worker before test Feb 2 23:04:27.470: INFO: rally-0a12c122-7dnmol6z-vwbwf from c-rally-0a12c122-vmv18pta started at 2021-01-28 03:37:38 +0000 UTC (1 container statuses recorded) Feb 2 23:04:27.470: INFO: Container rally-0a12c122-7dnmol6z ready: true, restart count 0 Feb 2 23:04:27.470: INFO: rally-0a12c122-fagfvvpw-sskvj from c-rally-0a12c122-vmv18pta started at 2021-01-28 03:37:54 +0000 UTC (1 container statuses recorded) Feb 2 23:04:27.470: INFO: Container rally-0a12c122-fagfvvpw ready: true, restart count 0 Feb 2 23:04:27.470: INFO: rally-0a12c122-iqj2mcat-2hfpj from c-rally-0a12c122-vmv18pta started at 2021-01-28 03:37:16 +0000 UTC (1 container statuses recorded) Feb 2 23:04:27.470: INFO: Container rally-0a12c122-iqj2mcat ready: true, restart count 0 Feb 2 23:04:27.470: INFO: rally-0a12c122-iqj2mcat-swp7f from c-rally-0a12c122-vmv18pta started at 2021-01-28 03:37:16 +0000 UTC (1 container statuses recorded) Feb 2 23:04:27.470: INFO: Container rally-0a12c122-iqj2mcat ready: true, restart count 0 Feb 2 23:04:27.470: INFO: rally-a8f48c6d-3kmika18-pdtzv from c-rally-a8f48c6d-bmjklszo started at 2021-01-10 20:04:23 +0000 UTC (1 container statuses recorded) Feb 2 23:04:27.470: INFO: Container rally-a8f48c6d-3kmika18 ready: true, restart count 0 Feb 2 23:04:27.470: INFO: rally-a8f48c6d-3kmika18-pllzg from c-rally-a8f48c6d-bmjklszo started at 2021-01-10 20:04:23 +0000 UTC (1 container statuses recorded) Feb 2 23:04:27.470: INFO: Container rally-a8f48c6d-3kmika18 ready: true, restart count 0 Feb 2 23:04:27.470: INFO: rally-a8f48c6d-4cyi45kq-j5tzz from c-rally-a8f48c6d-bmjklszo started at 2021-01-10 20:04:23 +0000 UTC (1 container statuses recorded) Feb 2 23:04:27.470: INFO: Container rally-a8f48c6d-4cyi45kq ready: true, restart count 0 Feb 2 23:04:27.470: INFO: rally-a8f48c6d-f3hls6a3-57dwc from c-rally-a8f48c6d-bmjklszo started at 2021-01-10 20:04:32 +0000 UTC (1 container statuses recorded) Feb 2 23:04:27.470: INFO: Container rally-a8f48c6d-f3hls6a3 ready: true, restart count 0 Feb 2 23:04:27.470: INFO: rally-a8f48c6d-1y3amfc0-lp8st from c-rally-a8f48c6d-lypjtwol started at 2021-01-10 20:04:32 +0000 UTC (1 container statuses recorded) Feb 2 23:04:27.470: INFO: Container rally-a8f48c6d-1y3amfc0 ready: true, restart count 0 Feb 2 23:04:27.470: INFO: rally-a8f48c6d-9pqmjehi-9zwjj from c-rally-a8f48c6d-lypjtwol started at 2021-01-10 20:04:23 +0000 UTC (1 container statuses recorded) Feb 2 23:04:27.470: INFO: Container rally-a8f48c6d-9pqmjehi ready: true, restart count 0 Feb 2 23:04:27.470: INFO: chaos-controller-manager-69c479c674-s796v from default started at 2021-01-10 20:58:24 +0000 UTC (1 container statuses recorded) Feb 2 23:04:27.470: INFO: Container chaos-mesh ready: true, restart count 0 Feb 2 23:04:27.470: INFO: chaos-daemon-lv692 from default started at 2021-01-10 20:58:25 +0000 UTC (1 container statuses recorded) Feb 2 23:04:27.470: INFO: Container chaos-daemon ready: true, restart count 0 Feb 2 23:04:27.470: INFO: kindnet-psm25 from kube-system started at 2021-01-10 17:38:10 +0000 UTC (1 container statuses recorded) Feb 2 23:04:27.470: INFO: Container kindnet-cni ready: true, restart count 0 Feb 2 23:04:27.470: INFO: kube-proxy-bmbcs from kube-system started at 2021-01-10 17:38:10 +0000 UTC (1 container statuses recorded) Feb 2 23:04:27.470: INFO: Container kube-proxy ready: true, restart count 0 Feb 2 23:04:27.470: INFO: Logging pods the apiserver thinks is on node leguer-worker2 before test Feb 2 
23:04:27.477: INFO: rally-0a12c122-4xacdhsf-44v5r from c-rally-0a12c122-vmv18pta started at 2021-01-28 03:37:16 +0000 UTC (1 container statuses recorded) Feb 2 23:04:27.477: INFO: Container rally-0a12c122-4xacdhsf ready: true, restart count 0 Feb 2 23:04:27.477: INFO: rally-0a12c122-4xacdhsf-5c974 from c-rally-0a12c122-vmv18pta started at 2021-01-28 03:37:16 +0000 UTC (1 container statuses recorded) Feb 2 23:04:27.477: INFO: Container rally-0a12c122-4xacdhsf ready: true, restart count 0 Feb 2 23:04:27.477: INFO: rally-0a12c122-7dnmol6z-n9ztn from c-rally-0a12c122-vmv18pta started at 2021-01-28 03:37:38 +0000 UTC (1 container statuses recorded) Feb 2 23:04:27.477: INFO: Container rally-0a12c122-7dnmol6z ready: true, restart count 0 Feb 2 23:04:27.477: INFO: rally-0a12c122-fagfvvpw-cxsgt from c-rally-0a12c122-vmv18pta started at 2021-01-28 03:37:53 +0000 UTC (1 container statuses recorded) Feb 2 23:04:27.477: INFO: Container rally-0a12c122-fagfvvpw ready: true, restart count 0 Feb 2 23:04:27.477: INFO: rally-0a12c122-lqiac6cu-6fsz6 from c-rally-0a12c122-vmv18pta started at 2021-01-28 03:37:16 +0000 UTC (1 container statuses recorded) Feb 2 23:04:27.477: INFO: Container rally-0a12c122-lqiac6cu ready: true, restart count 0 Feb 2 23:04:27.477: INFO: rally-0a12c122-lqiac6cu-99jsp from c-rally-0a12c122-vmv18pta started at 2021-01-28 03:37:16 +0000 UTC (1 container statuses recorded) Feb 2 23:04:27.477: INFO: Container rally-0a12c122-lqiac6cu ready: true, restart count 0 Feb 2 23:04:27.477: INFO: rally-a8f48c6d-4cyi45kq-knr4r from c-rally-a8f48c6d-bmjklszo started at 2021-01-10 20:04:23 +0000 UTC (1 container statuses recorded) Feb 2 23:04:27.477: INFO: Container rally-a8f48c6d-4cyi45kq ready: true, restart count 0 Feb 2 23:04:27.477: INFO: rally-a8f48c6d-f3hls6a3-dwt8n from c-rally-a8f48c6d-bmjklszo started at 2021-01-10 20:04:32 +0000 UTC (1 container statuses recorded) Feb 2 23:04:27.477: INFO: Container rally-a8f48c6d-f3hls6a3 ready: true, restart count 0 Feb 2 23:04:27.477: INFO: rally-a8f48c6d-1y3amfc0-hh9qk from c-rally-a8f48c6d-lypjtwol started at 2021-01-10 20:04:32 +0000 UTC (1 container statuses recorded) Feb 2 23:04:27.477: INFO: Container rally-a8f48c6d-1y3amfc0 ready: true, restart count 0 Feb 2 23:04:27.477: INFO: rally-a8f48c6d-9pqmjehi-85slb from c-rally-a8f48c6d-lypjtwol started at 2021-01-10 20:04:23 +0000 UTC (1 container statuses recorded) Feb 2 23:04:27.477: INFO: Container rally-a8f48c6d-9pqmjehi ready: true, restart count 0 Feb 2 23:04:27.477: INFO: rally-a8f48c6d-vnukxqu0-llj24 from c-rally-a8f48c6d-lypjtwol started at 2021-01-10 20:04:23 +0000 UTC (1 container statuses recorded) Feb 2 23:04:27.477: INFO: Container rally-a8f48c6d-vnukxqu0 ready: true, restart count 0 Feb 2 23:04:27.477: INFO: rally-a8f48c6d-vnukxqu0-v85kr from c-rally-a8f48c6d-lypjtwol started at 2021-01-10 20:04:23 +0000 UTC (1 container statuses recorded) Feb 2 23:04:27.477: INFO: Container rally-a8f48c6d-vnukxqu0 ready: true, restart count 0 Feb 2 23:04:27.477: INFO: chaos-daemon-ffkg7 from default started at 2021-01-10 20:58:25 +0000 UTC (1 container statuses recorded) Feb 2 23:04:27.477: INFO: Container chaos-daemon ready: true, restart count 0 Feb 2 23:04:27.477: INFO: kindnet-8wggd from kube-system started at 2021-01-10 17:38:10 +0000 UTC (1 container statuses recorded) Feb 2 23:04:27.477: INFO: Container kindnet-cni ready: true, restart count 0 Feb 2 23:04:27.477: INFO: kube-proxy-29gxg from kube-system started at 2021-01-10 17:38:09 +0000 UTC (1 container statuses recorded) Feb 2 23:04:27.477: 
INFO: Container kube-proxy ready: true, restart count 0 [It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-67723f37-7b58-4263-a89d-e3f360d934ac 90 STEP: Trying to create a pod(pod1) with hostport 54321 and hostIP 127.0.0.1 and expect scheduled STEP: Trying to create another pod(pod2) with hostport 54321 but hostIP 172.18.0.13 on the node which pod1 resides and expect scheduled STEP: Trying to create a third pod(pod3) with hostport 54321, hostIP 172.18.0.13 but use UDP protocol on the node which pod2 resides STEP: checking connectivity from pod e2e-host-exec to serverIP: 127.0.0.1, port: 54321 Feb 2 23:04:47.694: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g --connect-timeout 5 --interface 172.18.0.13 http://127.0.0.1:54321/hostname] Namespace:sched-pred-3805 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Feb 2 23:04:47.694: INFO: >>> kubeConfig: /root/.kube/config I0202 23:04:47.725090 7 log.go:181] (0xc004450370) (0xc001edc5a0) Create stream I0202 23:04:47.725120 7 log.go:181] (0xc004450370) (0xc001edc5a0) Stream added, broadcasting: 1 I0202 23:04:47.727803 7 log.go:181] (0xc004450370) Reply frame received for 1 I0202 23:04:47.727856 7 log.go:181] (0xc004450370) (0xc0074ba280) Create stream I0202 23:04:47.727869 7 log.go:181] (0xc004450370) (0xc0074ba280) Stream added, broadcasting: 3 I0202 23:04:47.729322 7 log.go:181] (0xc004450370) Reply frame received for 3 I0202 23:04:47.729359 7 log.go:181] (0xc004450370) (0xc0074ba3c0) Create stream I0202 23:04:47.729373 7 log.go:181] (0xc004450370) (0xc0074ba3c0) Stream added, broadcasting: 5 I0202 23:04:47.730378 7 log.go:181] (0xc004450370) Reply frame received for 5 I0202 23:04:47.812981 7 log.go:181] (0xc004450370) Data frame received for 5 I0202 23:04:47.813019 7 log.go:181] (0xc0074ba3c0) (5) Data frame handling I0202 23:04:47.813045 7 log.go:181] (0xc0074ba3c0) (5) Data frame sent I0202 23:04:47.813134 7 log.go:181] (0xc004450370) Data frame received for 5 I0202 23:04:47.813157 7 log.go:181] (0xc0074ba3c0) (5) Data frame handling I0202 23:04:47.813177 7 log.go:181] (0xc0074ba3c0) (5) Data frame sent I0202 23:04:47.813186 7 log.go:181] (0xc004450370) Data frame received for 5 I0202 23:04:47.813191 7 log.go:181] (0xc0074ba3c0) (5) Data frame handling I0202 23:04:47.813210 7 log.go:181] (0xc0074ba3c0) (5) Data frame sent I0202 23:04:47.813217 7 log.go:181] (0xc004450370) Data frame received for 5 I0202 23:04:47.813222 7 log.go:181] (0xc0074ba3c0) (5) Data frame handling I0202 23:04:47.813227 7 log.go:181] (0xc0074ba3c0) (5) Data frame sent I0202 23:04:47.813635 7 log.go:181] (0xc004450370) Data frame received for 3 I0202 23:04:47.813668 7 log.go:181] (0xc0074ba280) (3) Data frame handling I0202 23:04:47.813683 7 log.go:181] (0xc0074ba280) (3) Data frame sent I0202 23:04:47.813699 7 log.go:181] (0xc004450370) Data frame received for 5 I0202 23:04:47.813711 7 log.go:181] (0xc0074ba3c0) (5) Data frame handling I0202 23:04:47.813724 7 log.go:181] (0xc0074ba3c0) (5) Data frame sent I0202 23:04:47.814181 7 log.go:181] 
(0xc004450370) Data frame received for 3 I0202 23:04:47.814196 7 log.go:181] (0xc0074ba280) (3) Data frame handling I0202 23:04:47.814412 7 log.go:181] (0xc004450370) Data frame received for 5 I0202 23:04:47.814439 7 log.go:181] (0xc0074ba3c0) (5) Data frame handling I0202 23:04:47.816127 7 log.go:181] (0xc004450370) Data frame received for 1 I0202 23:04:47.816207 7 log.go:181] (0xc001edc5a0) (1) Data frame handling I0202 23:04:47.816241 7 log.go:181] (0xc001edc5a0) (1) Data frame sent I0202 23:04:47.816258 7 log.go:181] (0xc004450370) (0xc001edc5a0) Stream removed, broadcasting: 1 I0202 23:04:47.816278 7 log.go:181] (0xc004450370) Go away received I0202 23:04:47.816345 7 log.go:181] (0xc004450370) (0xc001edc5a0) Stream removed, broadcasting: 1 I0202 23:04:47.816365 7 log.go:181] (0xc004450370) (0xc0074ba280) Stream removed, broadcasting: 3 I0202 23:04:47.816394 7 log.go:181] (0xc004450370) (0xc0074ba3c0) Stream removed, broadcasting: 5 STEP: checking connectivity from pod e2e-host-exec to serverIP: 172.18.0.13, port: 54321 Feb 2 23:04:47.816: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g --connect-timeout 5 http://172.18.0.13:54321/hostname] Namespace:sched-pred-3805 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Feb 2 23:04:47.816: INFO: >>> kubeConfig: /root/.kube/config I0202 23:04:47.851966 7 log.go:181] (0xc000e28580) (0xc002418320) Create stream I0202 23:04:47.851994 7 log.go:181] (0xc000e28580) (0xc002418320) Stream added, broadcasting: 1 I0202 23:04:47.854530 7 log.go:181] (0xc000e28580) Reply frame received for 1 I0202 23:04:47.854570 7 log.go:181] (0xc000e28580) (0xc00385c000) Create stream I0202 23:04:47.854587 7 log.go:181] (0xc000e28580) (0xc00385c000) Stream added, broadcasting: 3 I0202 23:04:47.855667 7 log.go:181] (0xc000e28580) Reply frame received for 3 I0202 23:04:47.855700 7 log.go:181] (0xc000e28580) (0xc003f18320) Create stream I0202 23:04:47.855713 7 log.go:181] (0xc000e28580) (0xc003f18320) Stream added, broadcasting: 5 I0202 23:04:47.856513 7 log.go:181] (0xc000e28580) Reply frame received for 5 I0202 23:04:47.936585 7 log.go:181] (0xc000e28580) Data frame received for 5 I0202 23:04:47.936640 7 log.go:181] (0xc003f18320) (5) Data frame handling I0202 23:04:47.936677 7 log.go:181] (0xc003f18320) (5) Data frame sent I0202 23:04:47.936747 7 log.go:181] (0xc000e28580) Data frame received for 5 I0202 23:04:47.936789 7 log.go:181] (0xc003f18320) (5) Data frame handling I0202 23:04:47.936942 7 log.go:181] (0xc003f18320) (5) Data frame sent I0202 23:04:47.936981 7 log.go:181] (0xc000e28580) Data frame received for 5 I0202 23:04:47.937011 7 log.go:181] (0xc003f18320) (5) Data frame handling I0202 23:04:47.937052 7 log.go:181] (0xc003f18320) (5) Data frame sent I0202 23:04:47.937079 7 log.go:181] (0xc000e28580) Data frame received for 5 I0202 23:04:47.937105 7 log.go:181] (0xc003f18320) (5) Data frame handling I0202 23:04:47.937199 7 log.go:181] (0xc003f18320) (5) Data frame sent I0202 23:04:47.937228 7 log.go:181] (0xc000e28580) Data frame received for 5 I0202 23:04:47.937239 7 log.go:181] (0xc003f18320) (5) Data frame handling I0202 23:04:47.937260 7 log.go:181] (0xc003f18320) (5) Data frame sent I0202 23:04:47.937273 7 log.go:181] (0xc000e28580) Data frame received for 5 I0202 23:04:47.937282 7 log.go:181] (0xc003f18320) (5) Data frame handling I0202 23:04:47.937296 7 log.go:181] (0xc003f18320) (5) Data frame sent I0202 23:04:47.937309 7 log.go:181] (0xc000e28580) Data frame 
received for 5 I0202 23:04:47.937321 7 log.go:181] (0xc003f18320) (5) Data frame handling I0202 23:04:47.937356 7 log.go:181] (0xc003f18320) (5) Data frame sent I0202 23:04:47.937387 7 log.go:181] (0xc000e28580) Data frame received for 5 I0202 23:04:47.937407 7 log.go:181] (0xc003f18320) (5) Data frame handling I0202 23:04:47.937422 7 log.go:181] (0xc003f18320) (5) Data frame sent I0202 23:04:47.937459 7 log.go:181] (0xc000e28580) Data frame received for 3 I0202 23:04:47.937525 7 log.go:181] (0xc00385c000) (3) Data frame handling I0202 23:04:47.937550 7 log.go:181] (0xc00385c000) (3) Data frame sent I0202 23:04:47.937613 7 log.go:181] (0xc000e28580) Data frame received for 5 I0202 23:04:47.937659 7 log.go:181] (0xc003f18320) (5) Data frame handling I0202 23:04:47.937690 7 log.go:181] (0xc003f18320) (5) Data frame sent I0202 23:04:47.937710 7 log.go:181] (0xc000e28580) Data frame received for 5 I0202 23:04:47.937733 7 log.go:181] (0xc003f18320) (5) Data frame handling I0202 23:04:47.937761 7 log.go:181] (0xc003f18320) (5) Data frame sent I0202 23:04:47.938324 7 log.go:181] (0xc000e28580) Data frame received for 3 I0202 23:04:47.938338 7 log.go:181] (0xc00385c000) (3) Data frame handling I0202 23:04:47.938367 7 log.go:181] (0xc000e28580) Data frame received for 5 I0202 23:04:47.938396 7 log.go:181] (0xc003f18320) (5) Data frame handling I0202 23:04:47.939559 7 log.go:181] (0xc000e28580) Data frame received for 1 I0202 23:04:47.939572 7 log.go:181] (0xc002418320) (1) Data frame handling I0202 23:04:47.939579 7 log.go:181] (0xc002418320) (1) Data frame sent I0202 23:04:47.939802 7 log.go:181] (0xc000e28580) (0xc002418320) Stream removed, broadcasting: 1 I0202 23:04:47.939834 7 log.go:181] (0xc000e28580) Go away received I0202 23:04:47.939943 7 log.go:181] (0xc000e28580) (0xc002418320) Stream removed, broadcasting: 1 I0202 23:04:47.939971 7 log.go:181] (0xc000e28580) (0xc00385c000) Stream removed, broadcasting: 3 I0202 23:04:47.939985 7 log.go:181] (0xc000e28580) (0xc003f18320) Stream removed, broadcasting: 5 STEP: checking connectivity from pod e2e-host-exec to serverIP: 172.18.0.13, port: 54321 UDP Feb 2 23:04:47.940: INFO: ExecWithOptions {Command:[/bin/sh -c nc -vuz -w 5 172.18.0.13 54321] Namespace:sched-pred-3805 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Feb 2 23:04:47.940: INFO: >>> kubeConfig: /root/.kube/config I0202 23:04:47.977028 7 log.go:181] (0xc005d4c000) (0xc0074ba6e0) Create stream I0202 23:04:47.977066 7 log.go:181] (0xc005d4c000) (0xc0074ba6e0) Stream added, broadcasting: 1 I0202 23:04:47.979629 7 log.go:181] (0xc005d4c000) Reply frame received for 1 I0202 23:04:47.979684 7 log.go:181] (0xc005d4c000) (0xc0074ba780) Create stream I0202 23:04:47.979760 7 log.go:181] (0xc005d4c000) (0xc0074ba780) Stream added, broadcasting: 3 I0202 23:04:47.980664 7 log.go:181] (0xc005d4c000) Reply frame received for 3 I0202 23:04:47.980711 7 log.go:181] (0xc005d4c000) (0xc00385c0a0) Create stream I0202 23:04:47.980733 7 log.go:181] (0xc005d4c000) (0xc00385c0a0) Stream added, broadcasting: 5 I0202 23:04:47.981764 7 log.go:181] (0xc005d4c000) Reply frame received for 5 I0202 23:04:53.064747 7 log.go:181] (0xc005d4c000) Data frame received for 3 I0202 23:04:53.064939 7 log.go:181] (0xc0074ba780) (3) Data frame handling I0202 23:04:53.064989 7 log.go:181] (0xc005d4c000) Data frame received for 5 I0202 23:04:53.065020 7 log.go:181] (0xc00385c0a0) (5) Data frame handling I0202 23:04:53.065049 7 log.go:181] 
(0xc00385c0a0) (5) Data frame sent I0202 23:04:53.065068 7 log.go:181] (0xc005d4c000) Data frame received for 5 I0202 23:04:53.065080 7 log.go:181] (0xc00385c0a0) (5) Data frame handling I0202 23:04:53.066555 7 log.go:181] (0xc005d4c000) Data frame received for 1 I0202 23:04:53.066589 7 log.go:181] (0xc0074ba6e0) (1) Data frame handling I0202 23:04:53.066618 7 log.go:181] (0xc0074ba6e0) (1) Data frame sent I0202 23:04:53.066717 7 log.go:181] (0xc005d4c000) (0xc0074ba6e0) Stream removed, broadcasting: 1 I0202 23:04:53.066769 7 log.go:181] (0xc005d4c000) Go away received I0202 23:04:53.066869 7 log.go:181] (0xc005d4c000) (0xc0074ba6e0) Stream removed, broadcasting: 1 I0202 23:04:53.066896 7 log.go:181] (0xc005d4c000) (0xc0074ba780) Stream removed, broadcasting: 3 I0202 23:04:53.066920 7 log.go:181] (0xc005d4c000) (0xc00385c0a0) Stream removed, broadcasting: 5 STEP: checking connectivity from pod e2e-host-exec to serverIP: 127.0.0.1, port: 54321 Feb 2 23:04:53.066: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g --connect-timeout 5 --interface 172.18.0.13 http://127.0.0.1:54321/hostname] Namespace:sched-pred-3805 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Feb 2 23:04:53.067: INFO: >>> kubeConfig: /root/.kube/config I0202 23:04:53.103573 7 log.go:181] (0xc000816dc0) (0xc0019981e0) Create stream I0202 23:04:53.103598 7 log.go:181] (0xc000816dc0) (0xc0019981e0) Stream added, broadcasting: 1 I0202 23:04:53.105888 7 log.go:181] (0xc000816dc0) Reply frame received for 1 I0202 23:04:53.105916 7 log.go:181] (0xc000816dc0) (0xc003f183c0) Create stream I0202 23:04:53.105928 7 log.go:181] (0xc000816dc0) (0xc003f183c0) Stream added, broadcasting: 3 I0202 23:04:53.106928 7 log.go:181] (0xc000816dc0) Reply frame received for 3 I0202 23:04:53.106990 7 log.go:181] (0xc000816dc0) (0xc0024183c0) Create stream I0202 23:04:53.107018 7 log.go:181] (0xc000816dc0) (0xc0024183c0) Stream added, broadcasting: 5 I0202 23:04:53.107858 7 log.go:181] (0xc000816dc0) Reply frame received for 5 I0202 23:04:53.209056 7 log.go:181] (0xc000816dc0) Data frame received for 5 I0202 23:04:53.209091 7 log.go:181] (0xc0024183c0) (5) Data frame handling I0202 23:04:53.209116 7 log.go:181] (0xc0024183c0) (5) Data frame sent I0202 23:04:53.209131 7 log.go:181] (0xc000816dc0) Data frame received for 5 I0202 23:04:53.209145 7 log.go:181] (0xc0024183c0) (5) Data frame handling I0202 23:04:53.209166 7 log.go:181] (0xc0024183c0) (5) Data frame sent I0202 23:04:53.209180 7 log.go:181] (0xc000816dc0) Data frame received for 5 I0202 23:04:53.209192 7 log.go:181] (0xc0024183c0) (5) Data frame handling I0202 23:04:53.209208 7 log.go:181] (0xc0024183c0) (5) Data frame sent I0202 23:04:53.209220 7 log.go:181] (0xc000816dc0) Data frame received for 5 I0202 23:04:53.209232 7 log.go:181] (0xc0024183c0) (5) Data frame handling I0202 23:04:53.209244 7 log.go:181] (0xc0024183c0) (5) Data frame sent I0202 23:04:53.209683 7 log.go:181] (0xc000816dc0) Data frame received for 5 I0202 23:04:53.209731 7 log.go:181] (0xc0024183c0) (5) Data frame handling I0202 23:04:53.209812 7 log.go:181] (0xc0024183c0) (5) Data frame sent I0202 23:04:53.209907 7 log.go:181] (0xc000816dc0) Data frame received for 3 I0202 23:04:53.209948 7 log.go:181] (0xc003f183c0) (3) Data frame handling I0202 23:04:53.209983 7 log.go:181] (0xc003f183c0) (3) Data frame sent I0202 23:04:53.210156 7 log.go:181] (0xc000816dc0) Data frame received for 5 I0202 23:04:53.210176 7 log.go:181] (0xc0024183c0) 
(5) Data frame handling I0202 23:04:53.210237 7 log.go:181] (0xc000816dc0) Data frame received for 3 I0202 23:04:53.210270 7 log.go:181] (0xc003f183c0) (3) Data frame handling I0202 23:04:53.211857 7 log.go:181] (0xc000816dc0) Data frame received for 1 I0202 23:04:53.211893 7 log.go:181] (0xc0019981e0) (1) Data frame handling I0202 23:04:53.211923 7 log.go:181] (0xc0019981e0) (1) Data frame sent I0202 23:04:53.211952 7 log.go:181] (0xc000816dc0) (0xc0019981e0) Stream removed, broadcasting: 1 I0202 23:04:53.211982 7 log.go:181] (0xc000816dc0) Go away received I0202 23:04:53.212092 7 log.go:181] (0xc000816dc0) (0xc0019981e0) Stream removed, broadcasting: 1 I0202 23:04:53.212121 7 log.go:181] (0xc000816dc0) (0xc003f183c0) Stream removed, broadcasting: 3 I0202 23:04:53.212135 7 log.go:181] (0xc000816dc0) (0xc0024183c0) Stream removed, broadcasting: 5 STEP: checking connectivity from pod e2e-host-exec to serverIP: 172.18.0.13, port: 54321 Feb 2 23:04:53.212: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g --connect-timeout 5 http://172.18.0.13:54321/hostname] Namespace:sched-pred-3805 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Feb 2 23:04:53.212: INFO: >>> kubeConfig: /root/.kube/config I0202 23:04:53.247499 7 log.go:181] (0xc005d4c6e0) (0xc0074baa00) Create stream I0202 23:04:53.247525 7 log.go:181] (0xc005d4c6e0) (0xc0074baa00) Stream added, broadcasting: 1 I0202 23:04:53.249757 7 log.go:181] (0xc005d4c6e0) Reply frame received for 1 I0202 23:04:53.249799 7 log.go:181] (0xc005d4c6e0) (0xc002418460) Create stream I0202 23:04:53.249814 7 log.go:181] (0xc005d4c6e0) (0xc002418460) Stream added, broadcasting: 3 I0202 23:04:53.250773 7 log.go:181] (0xc005d4c6e0) Reply frame received for 3 I0202 23:04:53.250794 7 log.go:181] (0xc005d4c6e0) (0xc001998280) Create stream I0202 23:04:53.250805 7 log.go:181] (0xc005d4c6e0) (0xc001998280) Stream added, broadcasting: 5 I0202 23:04:53.251531 7 log.go:181] (0xc005d4c6e0) Reply frame received for 5 I0202 23:04:53.314117 7 log.go:181] (0xc005d4c6e0) Data frame received for 5 I0202 23:04:53.314154 7 log.go:181] (0xc001998280) (5) Data frame handling I0202 23:04:53.314179 7 log.go:181] (0xc001998280) (5) Data frame sent I0202 23:04:53.314197 7 log.go:181] (0xc005d4c6e0) Data frame received for 5 I0202 23:04:53.314210 7 log.go:181] (0xc001998280) (5) Data frame handling I0202 23:04:53.314231 7 log.go:181] (0xc001998280) (5) Data frame sent I0202 23:04:53.314244 7 log.go:181] (0xc005d4c6e0) Data frame received for 5 I0202 23:04:53.314257 7 log.go:181] (0xc001998280) (5) Data frame handling I0202 23:04:53.314277 7 log.go:181] (0xc001998280) (5) Data frame sent I0202 23:04:53.314533 7 log.go:181] (0xc005d4c6e0) Data frame received for 5 I0202 23:04:53.314563 7 log.go:181] (0xc005d4c6e0) Data frame received for 3 I0202 23:04:53.314591 7 log.go:181] (0xc002418460) (3) Data frame handling I0202 23:04:53.314606 7 log.go:181] (0xc002418460) (3) Data frame sent I0202 23:04:53.314665 7 log.go:181] (0xc001998280) (5) Data frame handling I0202 23:04:53.314690 7 log.go:181] (0xc001998280) (5) Data frame sent I0202 23:04:53.314701 7 log.go:181] (0xc005d4c6e0) Data frame received for 5 I0202 23:04:53.314710 7 log.go:181] (0xc001998280) (5) Data frame handling I0202 23:04:53.314723 7 log.go:181] (0xc001998280) (5) Data frame sent I0202 23:04:53.315380 7 log.go:181] (0xc005d4c6e0) Data frame received for 3 I0202 23:04:53.315404 7 log.go:181] (0xc005d4c6e0) Data frame received for 5 I0202 
23:04:53.315418 7 log.go:181] (0xc001998280) (5) Data frame handling I0202 23:04:53.315447 7 log.go:181] (0xc002418460) (3) Data frame handling I0202 23:04:53.316992 7 log.go:181] (0xc005d4c6e0) Data frame received for 1 I0202 23:04:53.317016 7 log.go:181] (0xc0074baa00) (1) Data frame handling I0202 23:04:53.317030 7 log.go:181] (0xc0074baa00) (1) Data frame sent I0202 23:04:53.317051 7 log.go:181] (0xc005d4c6e0) (0xc0074baa00) Stream removed, broadcasting: 1 I0202 23:04:53.317107 7 log.go:181] (0xc005d4c6e0) Go away received I0202 23:04:53.317212 7 log.go:181] (0xc005d4c6e0) (0xc0074baa00) Stream removed, broadcasting: 1 I0202 23:04:53.317236 7 log.go:181] (0xc005d4c6e0) (0xc002418460) Stream removed, broadcasting: 3 I0202 23:04:53.317255 7 log.go:181] (0xc005d4c6e0) (0xc001998280) Stream removed, broadcasting: 5 STEP: checking connectivity from pod e2e-host-exec to serverIP: 172.18.0.13, port: 54321 UDP Feb 2 23:04:53.317: INFO: ExecWithOptions {Command:[/bin/sh -c nc -vuz -w 5 172.18.0.13 54321] Namespace:sched-pred-3805 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Feb 2 23:04:53.317: INFO: >>> kubeConfig: /root/.kube/config I0202 23:04:53.351369 7 log.go:181] (0xc00229e4d0) (0xc003f186e0) Create stream I0202 23:04:53.351419 7 log.go:181] (0xc00229e4d0) (0xc003f186e0) Stream added, broadcasting: 1 I0202 23:04:53.355805 7 log.go:181] (0xc00229e4d0) Reply frame received for 1 I0202 23:04:53.355927 7 log.go:181] (0xc00229e4d0) (0xc003f18780) Create stream I0202 23:04:53.355952 7 log.go:181] (0xc00229e4d0) (0xc003f18780) Stream added, broadcasting: 3 I0202 23:04:53.357242 7 log.go:181] (0xc00229e4d0) Reply frame received for 3 I0202 23:04:53.357288 7 log.go:181] (0xc00229e4d0) (0xc002418500) Create stream I0202 23:04:53.357313 7 log.go:181] (0xc00229e4d0) (0xc002418500) Stream added, broadcasting: 5 I0202 23:04:53.359737 7 log.go:181] (0xc00229e4d0) Reply frame received for 5 I0202 23:04:58.420526 7 log.go:181] (0xc00229e4d0) Data frame received for 5 I0202 23:04:58.420660 7 log.go:181] (0xc002418500) (5) Data frame handling I0202 23:04:58.420707 7 log.go:181] (0xc002418500) (5) Data frame sent I0202 23:04:58.420749 7 log.go:181] (0xc00229e4d0) Data frame received for 5 I0202 23:04:58.420778 7 log.go:181] (0xc002418500) (5) Data frame handling I0202 23:04:58.420905 7 log.go:181] (0xc00229e4d0) Data frame received for 3 I0202 23:04:58.420956 7 log.go:181] (0xc003f18780) (3) Data frame handling I0202 23:04:58.422947 7 log.go:181] (0xc00229e4d0) Data frame received for 1 I0202 23:04:58.422993 7 log.go:181] (0xc003f186e0) (1) Data frame handling I0202 23:04:58.423018 7 log.go:181] (0xc003f186e0) (1) Data frame sent I0202 23:04:58.423042 7 log.go:181] (0xc00229e4d0) (0xc003f186e0) Stream removed, broadcasting: 1 I0202 23:04:58.423184 7 log.go:181] (0xc00229e4d0) (0xc003f186e0) Stream removed, broadcasting: 1 I0202 23:04:58.423216 7 log.go:181] (0xc00229e4d0) (0xc003f18780) Stream removed, broadcasting: 3 I0202 23:04:58.423244 7 log.go:181] (0xc00229e4d0) (0xc002418500) Stream removed, broadcasting: 5 STEP: checking connectivity from pod e2e-host-exec to serverIP: 127.0.0.1, port: 54321 Feb 2 23:04:58.423: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g --connect-timeout 5 --interface 172.18.0.13 http://127.0.0.1:54321/hostname] Namespace:sched-pred-3805 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Feb 2 23:04:58.423: 
INFO: >>> kubeConfig: /root/.kube/config I0202 23:04:58.423353 7 log.go:181] (0xc00229e4d0) Go away received I0202 23:04:58.460970 7 log.go:181] (0xc000e28dc0) (0xc002418780) Create stream I0202 23:04:58.461004 7 log.go:181] (0xc000e28dc0) (0xc002418780) Stream added, broadcasting: 1 I0202 23:04:58.462768 7 log.go:181] (0xc000e28dc0) Reply frame received for 1 I0202 23:04:58.462803 7 log.go:181] (0xc000e28dc0) (0xc001edc640) Create stream I0202 23:04:58.462816 7 log.go:181] (0xc000e28dc0) (0xc001edc640) Stream added, broadcasting: 3 I0202 23:04:58.463664 7 log.go:181] (0xc000e28dc0) Reply frame received for 3 I0202 23:04:58.463697 7 log.go:181] (0xc000e28dc0) (0xc0074baaa0) Create stream I0202 23:04:58.463707 7 log.go:181] (0xc000e28dc0) (0xc0074baaa0) Stream added, broadcasting: 5 I0202 23:04:58.464466 7 log.go:181] (0xc000e28dc0) Reply frame received for 5 I0202 23:04:58.547643 7 log.go:181] (0xc000e28dc0) Data frame received for 5 I0202 23:04:58.547677 7 log.go:181] (0xc0074baaa0) (5) Data frame handling I0202 23:04:58.547692 7 log.go:181] (0xc0074baaa0) (5) Data frame sent I0202 23:04:58.547710 7 log.go:181] (0xc000e28dc0) Data frame received for 5 I0202 23:04:58.547728 7 log.go:181] (0xc0074baaa0) (5) Data frame handling I0202 23:04:58.547749 7 log.go:181] (0xc0074baaa0) (5) Data frame sent I0202 23:04:58.547762 7 log.go:181] (0xc000e28dc0) Data frame received for 5 I0202 23:04:58.547774 7 log.go:181] (0xc0074baaa0) (5) Data frame handling I0202 23:04:58.547792 7 log.go:181] (0xc0074baaa0) (5) Data frame sent I0202 23:04:58.547803 7 log.go:181] (0xc000e28dc0) Data frame received for 5 I0202 23:04:58.547815 7 log.go:181] (0xc0074baaa0) (5) Data frame handling I0202 23:04:58.547829 7 log.go:181] (0xc0074baaa0) (5) Data frame sent I0202 23:04:58.547838 7 log.go:181] (0xc000e28dc0) Data frame received for 5 I0202 23:04:58.547846 7 log.go:181] (0xc0074baaa0) (5) Data frame handling I0202 23:04:58.547856 7 log.go:181] (0xc0074baaa0) (5) Data frame sent I0202 23:04:58.547865 7 log.go:181] (0xc000e28dc0) Data frame received for 5 I0202 23:04:58.547874 7 log.go:181] (0xc0074baaa0) (5) Data frame handling I0202 23:04:58.547891 7 log.go:181] (0xc0074baaa0) (5) Data frame sent I0202 23:04:58.547903 7 log.go:181] (0xc000e28dc0) Data frame received for 5 I0202 23:04:58.547916 7 log.go:181] (0xc0074baaa0) (5) Data frame handling I0202 23:04:58.547933 7 log.go:181] (0xc0074baaa0) (5) Data frame sent I0202 23:04:58.547945 7 log.go:181] (0xc000e28dc0) Data frame received for 5 I0202 23:04:58.547957 7 log.go:181] (0xc0074baaa0) (5) Data frame handling I0202 23:04:58.547972 7 log.go:181] (0xc0074baaa0) (5) Data frame sent I0202 23:04:58.548032 7 log.go:181] (0xc000e28dc0) Data frame received for 5 I0202 23:04:58.548057 7 log.go:181] (0xc0074baaa0) (5) Data frame handling I0202 23:04:58.548089 7 log.go:181] (0xc0074baaa0) (5) Data frame sent I0202 23:04:58.548193 7 log.go:181] (0xc000e28dc0) Data frame received for 5 I0202 23:04:58.548205 7 log.go:181] (0xc0074baaa0) (5) Data frame handling I0202 23:04:58.548214 7 log.go:181] (0xc0074baaa0) (5) Data frame sent I0202 23:04:58.548224 7 log.go:181] (0xc000e28dc0) Data frame received for 5 I0202 23:04:58.548237 7 log.go:181] (0xc0074baaa0) (5) Data frame handling I0202 23:04:58.548254 7 log.go:181] (0xc0074baaa0) (5) Data frame sent I0202 23:04:58.548264 7 log.go:181] (0xc000e28dc0) Data frame received for 5 I0202 23:04:58.548274 7 log.go:181] (0xc0074baaa0) (5) Data frame handling I0202 23:04:58.548285 7 log.go:181] (0xc0074baaa0) (5) Data frame sent I0202 
23:04:58.548307 7 log.go:181] (0xc000e28dc0) Data frame received for 3 I0202 23:04:58.548325 7 log.go:181] (0xc001edc640) (3) Data frame handling I0202 23:04:58.548376 7 log.go:181] (0xc001edc640) (3) Data frame sent I0202 23:04:58.549484 7 log.go:181] (0xc000e28dc0) Data frame received for 3 I0202 23:04:58.549505 7 log.go:181] (0xc001edc640) (3) Data frame handling I0202 23:04:58.549527 7 log.go:181] (0xc000e28dc0) Data frame received for 5 I0202 23:04:58.549535 7 log.go:181] (0xc0074baaa0) (5) Data frame handling I0202 23:04:58.552218 7 log.go:181] (0xc000e28dc0) Data frame received for 1 I0202 23:04:58.552250 7 log.go:181] (0xc002418780) (1) Data frame handling I0202 23:04:58.552290 7 log.go:181] (0xc002418780) (1) Data frame sent I0202 23:04:58.552311 7 log.go:181] (0xc000e28dc0) (0xc002418780) Stream removed, broadcasting: 1 I0202 23:04:58.552381 7 log.go:181] (0xc000e28dc0) Go away received I0202 23:04:58.552433 7 log.go:181] (0xc000e28dc0) (0xc002418780) Stream removed, broadcasting: 1 I0202 23:04:58.552469 7 log.go:181] (0xc000e28dc0) (0xc001edc640) Stream removed, broadcasting: 3 I0202 23:04:58.552483 7 log.go:181] (0xc000e28dc0) (0xc0074baaa0) Stream removed, broadcasting: 5 STEP: checking connectivity from pod e2e-host-exec to serverIP: 172.18.0.13, port: 54321 Feb 2 23:04:58.552: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g --connect-timeout 5 http://172.18.0.13:54321/hostname] Namespace:sched-pred-3805 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Feb 2 23:04:58.552: INFO: >>> kubeConfig: /root/.kube/config I0202 23:04:58.578635 7 log.go:181] (0xc005d4ca50) (0xc0074bac80) Create stream I0202 23:04:58.578674 7 log.go:181] (0xc005d4ca50) (0xc0074bac80) Stream added, broadcasting: 1 I0202 23:04:58.586063 7 log.go:181] (0xc005d4ca50) Reply frame received for 1 I0202 23:04:58.586117 7 log.go:181] (0xc005d4ca50) (0xc0074bad20) Create stream I0202 23:04:58.586139 7 log.go:181] (0xc005d4ca50) (0xc0074bad20) Stream added, broadcasting: 3 I0202 23:04:58.588066 7 log.go:181] (0xc005d4ca50) Reply frame received for 3 I0202 23:04:58.588106 7 log.go:181] (0xc005d4ca50) (0xc0024188c0) Create stream I0202 23:04:58.588124 7 log.go:181] (0xc005d4ca50) (0xc0024188c0) Stream added, broadcasting: 5 I0202 23:04:58.589158 7 log.go:181] (0xc005d4ca50) Reply frame received for 5 I0202 23:04:58.652718 7 log.go:181] (0xc005d4ca50) Data frame received for 5 I0202 23:04:58.652762 7 log.go:181] (0xc0024188c0) (5) Data frame handling I0202 23:04:58.652792 7 log.go:181] (0xc0024188c0) (5) Data frame sent I0202 23:04:58.652824 7 log.go:181] (0xc005d4ca50) Data frame received for 5 I0202 23:04:58.652970 7 log.go:181] (0xc0024188c0) (5) Data frame handling I0202 23:04:58.653003 7 log.go:181] (0xc0024188c0) (5) Data frame sent I0202 23:04:58.653026 7 log.go:181] (0xc005d4ca50) Data frame received for 5 I0202 23:04:58.653035 7 log.go:181] (0xc0024188c0) (5) Data frame handling I0202 23:04:58.653062 7 log.go:181] (0xc0024188c0) (5) Data frame sent I0202 23:04:58.653084 7 log.go:181] (0xc005d4ca50) Data frame received for 5 I0202 23:04:58.653101 7 log.go:181] (0xc0024188c0) (5) Data frame handling I0202 23:04:58.653135 7 log.go:181] (0xc0024188c0) (5) Data frame sent I0202 23:04:58.653154 7 log.go:181] (0xc005d4ca50) Data frame received for 5 I0202 23:04:58.653164 7 log.go:181] (0xc0024188c0) (5) Data frame handling I0202 23:04:58.653177 7 log.go:181] (0xc0024188c0) (5) Data frame sent I0202 23:04:58.653195 7 log.go:181] 
(0xc005d4ca50) Data frame received for 5 I0202 23:04:58.653212 7 log.go:181] (0xc0024188c0) (5) Data frame handling I0202 23:04:58.653221 7 log.go:181] (0xc0024188c0) (5) Data frame sent I0202 23:04:58.653279 7 log.go:181] (0xc005d4ca50) Data frame received for 5 I0202 23:04:58.653310 7 log.go:181] (0xc0024188c0) (5) Data frame handling I0202 23:04:58.653326 7 log.go:181] (0xc0024188c0) (5) Data frame sent I0202 23:04:58.653340 7 log.go:181] (0xc005d4ca50) Data frame received for 5 I0202 23:04:58.653352 7 log.go:181] (0xc0024188c0) (5) Data frame handling I0202 23:04:58.653372 7 log.go:181] (0xc0024188c0) (5) Data frame sent I0202 23:04:58.653568 7 log.go:181] (0xc005d4ca50) Data frame received for 3 I0202 23:04:58.653597 7 log.go:181] (0xc0074bad20) (3) Data frame handling I0202 23:04:58.653627 7 log.go:181] (0xc0074bad20) (3) Data frame sent I0202 23:04:58.654145 7 log.go:181] (0xc005d4ca50) Data frame received for 5 I0202 23:04:58.654165 7 log.go:181] (0xc0024188c0) (5) Data frame handling I0202 23:04:58.654268 7 log.go:181] (0xc005d4ca50) Data frame received for 3 I0202 23:04:58.654295 7 log.go:181] (0xc0074bad20) (3) Data frame handling I0202 23:04:58.655543 7 log.go:181] (0xc005d4ca50) Data frame received for 1 I0202 23:04:58.655561 7 log.go:181] (0xc0074bac80) (1) Data frame handling I0202 23:04:58.655576 7 log.go:181] (0xc0074bac80) (1) Data frame sent I0202 23:04:58.655587 7 log.go:181] (0xc005d4ca50) (0xc0074bac80) Stream removed, broadcasting: 1 I0202 23:04:58.655669 7 log.go:181] (0xc005d4ca50) (0xc0074bac80) Stream removed, broadcasting: 1 I0202 23:04:58.655689 7 log.go:181] (0xc005d4ca50) (0xc0074bad20) Stream removed, broadcasting: 3 I0202 23:04:58.655699 7 log.go:181] (0xc005d4ca50) (0xc0024188c0) Stream removed, broadcasting: 5 STEP: checking connectivity from pod e2e-host-exec to serverIP: 172.18.0.13, port: 54321 UDP Feb 2 23:04:58.655: INFO: ExecWithOptions {Command:[/bin/sh -c nc -vuz -w 5 172.18.0.13 54321] Namespace:sched-pred-3805 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Feb 2 23:04:58.655: INFO: >>> kubeConfig: /root/.kube/config I0202 23:04:58.656425 7 log.go:181] (0xc005d4ca50) Go away received I0202 23:04:58.683680 7 log.go:181] (0xc0008176b0) (0xc001998780) Create stream I0202 23:04:58.683717 7 log.go:181] (0xc0008176b0) (0xc001998780) Stream added, broadcasting: 1 I0202 23:04:58.685912 7 log.go:181] (0xc0008176b0) Reply frame received for 1 I0202 23:04:58.685976 7 log.go:181] (0xc0008176b0) (0xc003f18820) Create stream I0202 23:04:58.686001 7 log.go:181] (0xc0008176b0) (0xc003f18820) Stream added, broadcasting: 3 I0202 23:04:58.686943 7 log.go:181] (0xc0008176b0) Reply frame received for 3 I0202 23:04:58.686972 7 log.go:181] (0xc0008176b0) (0xc001edc6e0) Create stream I0202 23:04:58.686980 7 log.go:181] (0xc0008176b0) (0xc001edc6e0) Stream added, broadcasting: 5 I0202 23:04:58.687833 7 log.go:181] (0xc0008176b0) Reply frame received for 5 I0202 23:05:03.745170 7 log.go:181] (0xc0008176b0) Data frame received for 5 I0202 23:05:03.745212 7 log.go:181] (0xc001edc6e0) (5) Data frame handling I0202 23:05:03.745235 7 log.go:181] (0xc001edc6e0) (5) Data frame sent I0202 23:05:03.745550 7 log.go:181] (0xc0008176b0) Data frame received for 3 I0202 23:05:03.745571 7 log.go:181] (0xc003f18820) (3) Data frame handling I0202 23:05:03.745717 7 log.go:181] (0xc0008176b0) Data frame received for 5 I0202 23:05:03.745737 7 log.go:181] (0xc001edc6e0) (5) Data frame handling I0202 
23:05:03.747277 7 log.go:181] (0xc0008176b0) Data frame received for 1 I0202 23:05:03.747294 7 log.go:181] (0xc001998780) (1) Data frame handling I0202 23:05:03.747308 7 log.go:181] (0xc001998780) (1) Data frame sent I0202 23:05:03.747416 7 log.go:181] (0xc0008176b0) (0xc001998780) Stream removed, broadcasting: 1 I0202 23:05:03.747471 7 log.go:181] (0xc0008176b0) Go away received I0202 23:05:03.747632 7 log.go:181] (0xc0008176b0) (0xc001998780) Stream removed, broadcasting: 1 I0202 23:05:03.747664 7 log.go:181] (0xc0008176b0) (0xc003f18820) Stream removed, broadcasting: 3 I0202 23:05:03.747685 7 log.go:181] (0xc0008176b0) (0xc001edc6e0) Stream removed, broadcasting: 5 STEP: checking connectivity from pod e2e-host-exec to serverIP: 127.0.0.1, port: 54321 Feb 2 23:05:03.747: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g --connect-timeout 5 --interface 172.18.0.13 http://127.0.0.1:54321/hostname] Namespace:sched-pred-3805 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Feb 2 23:05:03.747: INFO: >>> kubeConfig: /root/.kube/config I0202 23:05:03.799582 7 log.go:181] (0xc005d4d1e0) (0xc0074bafa0) Create stream I0202 23:05:03.799633 7 log.go:181] (0xc005d4d1e0) (0xc0074bafa0) Stream added, broadcasting: 1 I0202 23:05:03.801791 7 log.go:181] (0xc005d4d1e0) Reply frame received for 1 I0202 23:05:03.801826 7 log.go:181] (0xc005d4d1e0) (0xc002418960) Create stream I0202 23:05:03.801837 7 log.go:181] (0xc005d4d1e0) (0xc002418960) Stream added, broadcasting: 3 I0202 23:05:03.802694 7 log.go:181] (0xc005d4d1e0) Reply frame received for 3 I0202 23:05:03.802735 7 log.go:181] (0xc005d4d1e0) (0xc003f188c0) Create stream I0202 23:05:03.802751 7 log.go:181] (0xc005d4d1e0) (0xc003f188c0) Stream added, broadcasting: 5 I0202 23:05:03.803636 7 log.go:181] (0xc005d4d1e0) Reply frame received for 5 I0202 23:05:03.900084 7 log.go:181] (0xc005d4d1e0) Data frame received for 5 I0202 23:05:03.900115 7 log.go:181] (0xc003f188c0) (5) Data frame handling I0202 23:05:03.900128 7 log.go:181] (0xc003f188c0) (5) Data frame sent I0202 23:05:03.900146 7 log.go:181] (0xc005d4d1e0) Data frame received for 5 I0202 23:05:03.900154 7 log.go:181] (0xc003f188c0) (5) Data frame handling I0202 23:05:03.900180 7 log.go:181] (0xc003f188c0) (5) Data frame sent I0202 23:05:03.900191 7 log.go:181] (0xc005d4d1e0) Data frame received for 5 I0202 23:05:03.900201 7 log.go:181] (0xc003f188c0) (5) Data frame handling I0202 23:05:03.900211 7 log.go:181] (0xc003f188c0) (5) Data frame sent I0202 23:05:03.900222 7 log.go:181] (0xc005d4d1e0) Data frame received for 5 I0202 23:05:03.900231 7 log.go:181] (0xc003f188c0) (5) Data frame handling I0202 23:05:03.900249 7 log.go:181] (0xc003f188c0) (5) Data frame sent I0202 23:05:03.900259 7 log.go:181] (0xc005d4d1e0) Data frame received for 5 I0202 23:05:03.900268 7 log.go:181] (0xc003f188c0) (5) Data frame handling I0202 23:05:03.900278 7 log.go:181] (0xc003f188c0) (5) Data frame sent I0202 23:05:03.900288 7 log.go:181] (0xc005d4d1e0) Data frame received for 5 I0202 23:05:03.900297 7 log.go:181] (0xc003f188c0) (5) Data frame handling I0202 23:05:03.900309 7 log.go:181] (0xc003f188c0) (5) Data frame sent I0202 23:05:03.900318 7 log.go:181] (0xc005d4d1e0) Data frame received for 5 I0202 23:05:03.900327 7 log.go:181] (0xc003f188c0) (5) Data frame handling I0202 23:05:03.900337 7 log.go:181] (0xc003f188c0) (5) Data frame sent I0202 23:05:03.901370 7 log.go:181] (0xc005d4d1e0) Data frame received for 3 I0202 23:05:03.901422 
7 log.go:181] (0xc002418960) (3) Data frame handling I0202 23:05:03.901445 7 log.go:181] (0xc002418960) (3) Data frame sent I0202 23:05:03.901471 7 log.go:181] (0xc005d4d1e0) Data frame received for 5 I0202 23:05:03.901492 7 log.go:181] (0xc003f188c0) (5) Data frame handling I0202 23:05:03.901522 7 log.go:181] (0xc003f188c0) (5) Data frame sent I0202 23:05:03.901540 7 log.go:181] (0xc005d4d1e0) Data frame received for 5 I0202 23:05:03.901560 7 log.go:181] (0xc003f188c0) (5) Data frame handling I0202 23:05:03.901586 7 log.go:181] (0xc005d4d1e0) Data frame received for 3 I0202 23:05:03.901600 7 log.go:181] (0xc002418960) (3) Data frame handling I0202 23:05:03.903142 7 log.go:181] (0xc005d4d1e0) Data frame received for 1 I0202 23:05:03.903170 7 log.go:181] (0xc0074bafa0) (1) Data frame handling I0202 23:05:03.903185 7 log.go:181] (0xc0074bafa0) (1) Data frame sent I0202 23:05:03.903200 7 log.go:181] (0xc005d4d1e0) (0xc0074bafa0) Stream removed, broadcasting: 1 I0202 23:05:03.903262 7 log.go:181] (0xc005d4d1e0) Go away received I0202 23:05:03.903301 7 log.go:181] (0xc005d4d1e0) (0xc0074bafa0) Stream removed, broadcasting: 1 I0202 23:05:03.903317 7 log.go:181] (0xc005d4d1e0) (0xc002418960) Stream removed, broadcasting: 3 I0202 23:05:03.903331 7 log.go:181] (0xc005d4d1e0) (0xc003f188c0) Stream removed, broadcasting: 5 STEP: checking connectivity from pod e2e-host-exec to serverIP: 172.18.0.13, port: 54321 Feb 2 23:05:03.903: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g --connect-timeout 5 http://172.18.0.13:54321/hostname] Namespace:sched-pred-3805 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Feb 2 23:05:03.903: INFO: >>> kubeConfig: /root/.kube/config I0202 23:05:03.930872 7 log.go:181] (0xc000e29760) (0xc002418d20) Create stream I0202 23:05:03.930907 7 log.go:181] (0xc000e29760) (0xc002418d20) Stream added, broadcasting: 1 I0202 23:05:03.932985 7 log.go:181] (0xc000e29760) Reply frame received for 1 I0202 23:05:03.933021 7 log.go:181] (0xc000e29760) (0xc0074bb040) Create stream I0202 23:05:03.933032 7 log.go:181] (0xc000e29760) (0xc0074bb040) Stream added, broadcasting: 3 I0202 23:05:03.933979 7 log.go:181] (0xc000e29760) Reply frame received for 3 I0202 23:05:03.934016 7 log.go:181] (0xc000e29760) (0xc003f18960) Create stream I0202 23:05:03.934030 7 log.go:181] (0xc000e29760) (0xc003f18960) Stream added, broadcasting: 5 I0202 23:05:03.935063 7 log.go:181] (0xc000e29760) Reply frame received for 5 I0202 23:05:04.001205 7 log.go:181] (0xc000e29760) Data frame received for 5 I0202 23:05:04.001259 7 log.go:181] (0xc003f18960) (5) Data frame handling I0202 23:05:04.001288 7 log.go:181] (0xc003f18960) (5) Data frame sent I0202 23:05:04.001316 7 log.go:181] (0xc000e29760) Data frame received for 5 I0202 23:05:04.001341 7 log.go:181] (0xc003f18960) (5) Data frame handling I0202 23:05:04.001374 7 log.go:181] (0xc003f18960) (5) Data frame sent I0202 23:05:04.001397 7 log.go:181] (0xc000e29760) Data frame received for 5 I0202 23:05:04.001411 7 log.go:181] (0xc003f18960) (5) Data frame handling I0202 23:05:04.001436 7 log.go:181] (0xc003f18960) (5) Data frame sent I0202 23:05:04.001458 7 log.go:181] (0xc000e29760) Data frame received for 3 I0202 23:05:04.001496 7 log.go:181] (0xc0074bb040) (3) Data frame handling I0202 23:05:04.001514 7 log.go:181] (0xc0074bb040) (3) Data frame sent I0202 23:05:04.001547 7 log.go:181] (0xc000e29760) Data frame received for 5 I0202 23:05:04.001579 7 log.go:181] (0xc003f18960) (5) 
Data frame handling I0202 23:05:04.001609 7 log.go:181] (0xc003f18960) (5) Data frame sent I0202 23:05:04.001639 7 log.go:181] (0xc000e29760) Data frame received for 5 I0202 23:05:04.001675 7 log.go:181] (0xc003f18960) (5) Data frame handling I0202 23:05:04.001718 7 log.go:181] (0xc003f18960) (5) Data frame sent I0202 23:05:04.001732 7 log.go:181] (0xc000e29760) Data frame received for 5 I0202 23:05:04.001740 7 log.go:181] (0xc003f18960) (5) Data frame handling I0202 23:05:04.001754 7 log.go:181] (0xc003f18960) (5) Data frame sent I0202 23:05:04.002258 7 log.go:181] (0xc000e29760) Data frame received for 5 I0202 23:05:04.002283 7 log.go:181] (0xc003f18960) (5) Data frame handling I0202 23:05:04.002319 7 log.go:181] (0xc000e29760) Data frame received for 3 I0202 23:05:04.002366 7 log.go:181] (0xc0074bb040) (3) Data frame handling I0202 23:05:04.004511 7 log.go:181] (0xc000e29760) Data frame received for 1 I0202 23:05:04.004536 7 log.go:181] (0xc002418d20) (1) Data frame handling I0202 23:05:04.004547 7 log.go:181] (0xc002418d20) (1) Data frame sent I0202 23:05:04.004563 7 log.go:181] (0xc000e29760) (0xc002418d20) Stream removed, broadcasting: 1 I0202 23:05:04.004587 7 log.go:181] (0xc000e29760) Go away received I0202 23:05:04.004707 7 log.go:181] (0xc000e29760) (0xc002418d20) Stream removed, broadcasting: 1 I0202 23:05:04.004736 7 log.go:181] (0xc000e29760) (0xc0074bb040) Stream removed, broadcasting: 3 I0202 23:05:04.004747 7 log.go:181] (0xc000e29760) (0xc003f18960) Stream removed, broadcasting: 5 STEP: checking connectivity from pod e2e-host-exec to serverIP: 172.18.0.13, port: 54321 UDP Feb 2 23:05:04.004: INFO: ExecWithOptions {Command:[/bin/sh -c nc -vuz -w 5 172.18.0.13 54321] Namespace:sched-pred-3805 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Feb 2 23:05:04.004: INFO: >>> kubeConfig: /root/.kube/config I0202 23:05:04.035905 7 log.go:181] (0xc004450dc0) (0xc001edce60) Create stream I0202 23:05:04.035938 7 log.go:181] (0xc004450dc0) (0xc001edce60) Stream added, broadcasting: 1 I0202 23:05:04.038092 7 log.go:181] (0xc004450dc0) Reply frame received for 1 I0202 23:05:04.038137 7 log.go:181] (0xc004450dc0) (0xc001edd040) Create stream I0202 23:05:04.038167 7 log.go:181] (0xc004450dc0) (0xc001edd040) Stream added, broadcasting: 3 I0202 23:05:04.039098 7 log.go:181] (0xc004450dc0) Reply frame received for 3 I0202 23:05:04.039137 7 log.go:181] (0xc004450dc0) (0xc002418dc0) Create stream I0202 23:05:04.039150 7 log.go:181] (0xc004450dc0) (0xc002418dc0) Stream added, broadcasting: 5 I0202 23:05:04.040083 7 log.go:181] (0xc004450dc0) Reply frame received for 5 I0202 23:05:09.118960 7 log.go:181] (0xc004450dc0) Data frame received for 5 I0202 23:05:09.118985 7 log.go:181] (0xc002418dc0) (5) Data frame handling I0202 23:05:09.118996 7 log.go:181] (0xc002418dc0) (5) Data frame sent I0202 23:05:09.119010 7 log.go:181] (0xc004450dc0) Data frame received for 5 I0202 23:05:09.119016 7 log.go:181] (0xc002418dc0) (5) Data frame handling I0202 23:05:09.119174 7 log.go:181] (0xc004450dc0) Data frame received for 3 I0202 23:05:09.119192 7 log.go:181] (0xc001edd040) (3) Data frame handling I0202 23:05:09.121066 7 log.go:181] (0xc004450dc0) Data frame received for 1 I0202 23:05:09.121160 7 log.go:181] (0xc001edce60) (1) Data frame handling I0202 23:05:09.121200 7 log.go:181] (0xc001edce60) (1) Data frame sent I0202 23:05:09.121221 7 log.go:181] (0xc004450dc0) (0xc001edce60) Stream removed, broadcasting: 1 I0202 
23:05:09.121244 7 log.go:181] (0xc004450dc0) Go away received I0202 23:05:09.121336 7 log.go:181] (0xc004450dc0) (0xc001edce60) Stream removed, broadcasting: 1 I0202 23:05:09.121383 7 log.go:181] (0xc004450dc0) (0xc001edd040) Stream removed, broadcasting: 3 I0202 23:05:09.121394 7 log.go:181] (0xc004450dc0) (0xc002418dc0) Stream removed, broadcasting: 5 STEP: checking connectivity from pod e2e-host-exec to serverIP: 127.0.0.1, port: 54321 Feb 2 23:05:09.121: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g --connect-timeout 5 --interface 172.18.0.13 http://127.0.0.1:54321/hostname] Namespace:sched-pred-3805 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Feb 2 23:05:09.121: INFO: >>> kubeConfig: /root/.kube/config I0202 23:05:09.150630 7 log.go:181] (0xc0044514a0) (0xc001edd9a0) Create stream I0202 23:05:09.150676 7 log.go:181] (0xc0044514a0) (0xc001edd9a0) Stream added, broadcasting: 1 I0202 23:05:09.155932 7 log.go:181] (0xc0044514a0) Reply frame received for 1 I0202 23:05:09.155988 7 log.go:181] (0xc0044514a0) (0xc0074bb180) Create stream I0202 23:05:09.156012 7 log.go:181] (0xc0044514a0) (0xc0074bb180) Stream added, broadcasting: 3 I0202 23:05:09.158079 7 log.go:181] (0xc0044514a0) Reply frame received for 3 I0202 23:05:09.158126 7 log.go:181] (0xc0044514a0) (0xc003f18a00) Create stream I0202 23:05:09.158140 7 log.go:181] (0xc0044514a0) (0xc003f18a00) Stream added, broadcasting: 5 I0202 23:05:09.159179 7 log.go:181] (0xc0044514a0) Reply frame received for 5 I0202 23:05:09.255937 7 log.go:181] (0xc0044514a0) Data frame received for 5 I0202 23:05:09.255989 7 log.go:181] (0xc003f18a00) (5) Data frame handling I0202 23:05:09.256017 7 log.go:181] (0xc003f18a00) (5) Data frame sent I0202 23:05:09.256041 7 log.go:181] (0xc0044514a0) Data frame received for 5 I0202 23:05:09.256054 7 log.go:181] (0xc003f18a00) (5) Data frame handling I0202 23:05:09.256135 7 log.go:181] (0xc003f18a00) (5) Data frame sent I0202 23:05:09.256178 7 log.go:181] (0xc0044514a0) Data frame received for 5 I0202 23:05:09.256198 7 log.go:181] (0xc003f18a00) (5) Data frame handling I0202 23:05:09.256219 7 log.go:181] (0xc003f18a00) (5) Data frame sent I0202 23:05:09.256245 7 log.go:181] (0xc0044514a0) Data frame received for 5 I0202 23:05:09.256266 7 log.go:181] (0xc003f18a00) (5) Data frame handling I0202 23:05:09.256278 7 log.go:181] (0xc003f18a00) (5) Data frame sent I0202 23:05:09.256289 7 log.go:181] (0xc0044514a0) Data frame received for 5 I0202 23:05:09.256302 7 log.go:181] (0xc003f18a00) (5) Data frame handling I0202 23:05:09.256316 7 log.go:181] (0xc003f18a00) (5) Data frame sent I0202 23:05:09.256329 7 log.go:181] (0xc0044514a0) Data frame received for 5 I0202 23:05:09.256342 7 log.go:181] (0xc003f18a00) (5) Data frame handling I0202 23:05:09.256359 7 log.go:181] (0xc003f18a00) (5) Data frame sent I0202 23:05:09.256376 7 log.go:181] (0xc0044514a0) Data frame received for 5 I0202 23:05:09.256393 7 log.go:181] (0xc003f18a00) (5) Data frame handling I0202 23:05:09.256424 7 log.go:181] (0xc003f18a00) (5) Data frame sent I0202 23:05:09.256438 7 log.go:181] (0xc0044514a0) Data frame received for 5 I0202 23:05:09.256448 7 log.go:181] (0xc003f18a00) (5) Data frame handling I0202 23:05:09.256471 7 log.go:181] (0xc003f18a00) (5) Data frame sent I0202 23:05:09.256484 7 log.go:181] (0xc0044514a0) Data frame received for 5 I0202 23:05:09.256499 7 log.go:181] (0xc003f18a00) (5) Data frame handling I0202 23:05:09.256517 7 log.go:181] 
(0xc003f18a00) (5) Data frame sent I0202 23:05:09.257104 7 log.go:181] (0xc0044514a0) Data frame received for 3 I0202 23:05:09.257156 7 log.go:181] (0xc0074bb180) (3) Data frame handling I0202 23:05:09.257187 7 log.go:181] (0xc0074bb180) (3) Data frame sent I0202 23:05:09.257228 7 log.go:181] (0xc0044514a0) Data frame received for 5 I0202 23:05:09.257243 7 log.go:181] (0xc003f18a00) (5) Data frame handling I0202 23:05:09.257261 7 log.go:181] (0xc003f18a00) (5) Data frame sent I0202 23:05:09.257732 7 log.go:181] (0xc0044514a0) Data frame received for 5 I0202 23:05:09.257763 7 log.go:181] (0xc003f18a00) (5) Data frame handling I0202 23:05:09.257783 7 log.go:181] (0xc003f18a00) (5) Data frame sent I0202 23:05:09.261125 7 log.go:181] (0xc0044514a0) Data frame received for 3 I0202 23:05:09.261148 7 log.go:181] (0xc0074bb180) (3) Data frame handling I0202 23:05:09.261335 7 log.go:181] (0xc0044514a0) Data frame received for 5 I0202 23:05:09.261360 7 log.go:181] (0xc003f18a00) (5) Data frame handling I0202 23:05:09.262824 7 log.go:181] (0xc0044514a0) Data frame received for 1 I0202 23:05:09.262846 7 log.go:181] (0xc001edd9a0) (1) Data frame handling I0202 23:05:09.262865 7 log.go:181] (0xc001edd9a0) (1) Data frame sent I0202 23:05:09.262882 7 log.go:181] (0xc0044514a0) (0xc001edd9a0) Stream removed, broadcasting: 1 I0202 23:05:09.262908 7 log.go:181] (0xc0044514a0) Go away received I0202 23:05:09.263038 7 log.go:181] (0xc0044514a0) (0xc001edd9a0) Stream removed, broadcasting: 1 I0202 23:05:09.263056 7 log.go:181] (0xc0044514a0) (0xc0074bb180) Stream removed, broadcasting: 3 I0202 23:05:09.263071 7 log.go:181] (0xc0044514a0) (0xc003f18a00) Stream removed, broadcasting: 5 STEP: checking connectivity from pod e2e-host-exec to serverIP: 172.18.0.13, port: 54321 Feb 2 23:05:09.263: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g --connect-timeout 5 http://172.18.0.13:54321/hostname] Namespace:sched-pred-3805 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Feb 2 23:05:09.263: INFO: >>> kubeConfig: /root/.kube/config I0202 23:05:09.303048 7 log.go:181] (0xc005d4d8c0) (0xc0074bb400) Create stream I0202 23:05:09.303075 7 log.go:181] (0xc005d4d8c0) (0xc0074bb400) Stream added, broadcasting: 1 I0202 23:05:09.305788 7 log.go:181] (0xc005d4d8c0) Reply frame received for 1 I0202 23:05:09.305837 7 log.go:181] (0xc005d4d8c0) (0xc001998820) Create stream I0202 23:05:09.305857 7 log.go:181] (0xc005d4d8c0) (0xc001998820) Stream added, broadcasting: 3 I0202 23:05:09.306940 7 log.go:181] (0xc005d4d8c0) Reply frame received for 3 I0202 23:05:09.306978 7 log.go:181] (0xc005d4d8c0) (0xc001edda40) Create stream I0202 23:05:09.306992 7 log.go:181] (0xc005d4d8c0) (0xc001edda40) Stream added, broadcasting: 5 I0202 23:05:09.308049 7 log.go:181] (0xc005d4d8c0) Reply frame received for 5 I0202 23:05:09.379656 7 log.go:181] (0xc005d4d8c0) Data frame received for 5 I0202 23:05:09.379689 7 log.go:181] (0xc001edda40) (5) Data frame handling I0202 23:05:09.379699 7 log.go:181] (0xc001edda40) (5) Data frame sent I0202 23:05:09.379705 7 log.go:181] (0xc005d4d8c0) Data frame received for 5 I0202 23:05:09.379709 7 log.go:181] (0xc001edda40) (5) Data frame handling I0202 23:05:09.379815 7 log.go:181] (0xc001edda40) (5) Data frame sent I0202 23:05:09.379826 7 log.go:181] (0xc005d4d8c0) Data frame received for 5 I0202 23:05:09.379857 7 log.go:181] (0xc001edda40) (5) Data frame handling I0202 23:05:09.379865 7 log.go:181] (0xc001edda40) (5) Data frame 
sent I0202 23:05:09.379870 7 log.go:181] (0xc005d4d8c0) Data frame received for 5 I0202 23:05:09.379875 7 log.go:181] (0xc001edda40) (5) Data frame handling I0202 23:05:09.379946 7 log.go:181] (0xc001edda40) (5) Data frame sent I0202 23:05:09.379987 7 log.go:181] (0xc005d4d8c0) Data frame received for 5 I0202 23:05:09.380025 7 log.go:181] (0xc001edda40) (5) Data frame handling I0202 23:05:09.380055 7 log.go:181] (0xc001edda40) (5) Data frame sent I0202 23:05:09.380068 7 log.go:181] (0xc005d4d8c0) Data frame received for 5 I0202 23:05:09.380078 7 log.go:181] (0xc001edda40) (5) Data frame handling I0202 23:05:09.380100 7 log.go:181] (0xc001edda40) (5) Data frame sent I0202 23:05:09.380119 7 log.go:181] (0xc005d4d8c0) Data frame received for 5 I0202 23:05:09.380136 7 log.go:181] (0xc001edda40) (5) Data frame handling I0202 23:05:09.380161 7 log.go:181] (0xc001edda40) (5) Data frame sent I0202 23:05:09.380176 7 log.go:181] (0xc005d4d8c0) Data frame received for 5 I0202 23:05:09.380190 7 log.go:181] (0xc001edda40) (5) Data frame handling I0202 23:05:09.380205 7 log.go:181] (0xc001edda40) (5) Data frame sent I0202 23:05:09.380218 7 log.go:181] (0xc005d4d8c0) Data frame received for 5 I0202 23:05:09.380231 7 log.go:181] (0xc001edda40) (5) Data frame handling I0202 23:05:09.380250 7 log.go:181] (0xc001edda40) (5) Data frame sent I0202 23:05:09.380263 7 log.go:181] (0xc005d4d8c0) Data frame received for 5 I0202 23:05:09.380276 7 log.go:181] (0xc001edda40) (5) Data frame handling I0202 23:05:09.380293 7 log.go:181] (0xc001edda40) (5) Data frame sent I0202 23:05:09.380464 7 log.go:181] (0xc005d4d8c0) Data frame received for 5 I0202 23:05:09.380478 7 log.go:181] (0xc001edda40) (5) Data frame handling I0202 23:05:09.380492 7 log.go:181] (0xc001edda40) (5) Data frame sent I0202 23:05:09.380497 7 log.go:181] (0xc005d4d8c0) Data frame received for 5 I0202 23:05:09.380519 7 log.go:181] (0xc001edda40) (5) Data frame handling I0202 23:05:09.380533 7 log.go:181] (0xc001edda40) (5) Data frame sent I0202 23:05:09.380548 7 log.go:181] (0xc005d4d8c0) Data frame received for 3 I0202 23:05:09.380553 7 log.go:181] (0xc001998820) (3) Data frame handling I0202 23:05:09.380564 7 log.go:181] (0xc001998820) (3) Data frame sent I0202 23:05:09.381276 7 log.go:181] (0xc005d4d8c0) Data frame received for 3 I0202 23:05:09.381356 7 log.go:181] (0xc001998820) (3) Data frame handling I0202 23:05:09.381394 7 log.go:181] (0xc005d4d8c0) Data frame received for 5 I0202 23:05:09.381436 7 log.go:181] (0xc001edda40) (5) Data frame handling I0202 23:05:09.383256 7 log.go:181] (0xc005d4d8c0) Data frame received for 1 I0202 23:05:09.383276 7 log.go:181] (0xc0074bb400) (1) Data frame handling I0202 23:05:09.383295 7 log.go:181] (0xc0074bb400) (1) Data frame sent I0202 23:05:09.383315 7 log.go:181] (0xc005d4d8c0) (0xc0074bb400) Stream removed, broadcasting: 1 I0202 23:05:09.383393 7 log.go:181] (0xc005d4d8c0) Go away received I0202 23:05:09.383440 7 log.go:181] (0xc005d4d8c0) (0xc0074bb400) Stream removed, broadcasting: 1 I0202 23:05:09.383462 7 log.go:181] (0xc005d4d8c0) (0xc001998820) Stream removed, broadcasting: 3 I0202 23:05:09.383477 7 log.go:181] (0xc005d4d8c0) (0xc001edda40) Stream removed, broadcasting: 5 STEP: checking connectivity from pod e2e-host-exec to serverIP: 172.18.0.13, port: 54321 UDP Feb 2 23:05:09.383: INFO: ExecWithOptions {Command:[/bin/sh -c nc -vuz -w 5 172.18.0.13 54321] Namespace:sched-pred-3805 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true 
PreserveWhitespace:false Quiet:false} Feb 2 23:05:09.383: INFO: >>> kubeConfig: /root/.kube/config I0202 23:05:09.414575 7 log.go:181] (0xc000817ce0) (0xc001998a00) Create stream I0202 23:05:09.414607 7 log.go:181] (0xc000817ce0) (0xc001998a00) Stream added, broadcasting: 1 I0202 23:05:09.416805 7 log.go:181] (0xc000817ce0) Reply frame received for 1 I0202 23:05:09.416960 7 log.go:181] (0xc000817ce0) (0xc001eddae0) Create stream I0202 23:05:09.416984 7 log.go:181] (0xc000817ce0) (0xc001eddae0) Stream added, broadcasting: 3 I0202 23:05:09.417995 7 log.go:181] (0xc000817ce0) Reply frame received for 3 I0202 23:05:09.418038 7 log.go:181] (0xc000817ce0) (0xc0074bb5e0) Create stream I0202 23:05:09.418054 7 log.go:181] (0xc000817ce0) (0xc0074bb5e0) Stream added, broadcasting: 5 I0202 23:05:09.418962 7 log.go:181] (0xc000817ce0) Reply frame received for 5 I0202 23:05:14.493977 7 log.go:181] (0xc000817ce0) Data frame received for 5 I0202 23:05:14.494033 7 log.go:181] (0xc0074bb5e0) (5) Data frame handling I0202 23:05:14.494083 7 log.go:181] (0xc0074bb5e0) (5) Data frame sent I0202 23:05:14.494157 7 log.go:181] (0xc000817ce0) Data frame received for 3 I0202 23:05:14.494239 7 log.go:181] (0xc001eddae0) (3) Data frame handling I0202 23:05:14.494428 7 log.go:181] (0xc000817ce0) Data frame received for 5 I0202 23:05:14.494475 7 log.go:181] (0xc0074bb5e0) (5) Data frame handling I0202 23:05:14.496279 7 log.go:181] (0xc000817ce0) Data frame received for 1 I0202 23:05:14.496306 7 log.go:181] (0xc001998a00) (1) Data frame handling I0202 23:05:14.496327 7 log.go:181] (0xc001998a00) (1) Data frame sent I0202 23:05:14.496358 7 log.go:181] (0xc000817ce0) (0xc001998a00) Stream removed, broadcasting: 1 I0202 23:05:14.496389 7 log.go:181] (0xc000817ce0) Go away received I0202 23:05:14.496500 7 log.go:181] (0xc000817ce0) (0xc001998a00) Stream removed, broadcasting: 1 I0202 23:05:14.496526 7 log.go:181] (0xc000817ce0) (0xc001eddae0) Stream removed, broadcasting: 3 I0202 23:05:14.496536 7 log.go:181] (0xc000817ce0) (0xc0074bb5e0) Stream removed, broadcasting: 5 STEP: removing the label kubernetes.io/e2e-67723f37-7b58-4263-a89d-e3f360d934ac off the node leguer-worker STEP: verifying the node doesn't have the label kubernetes.io/e2e-67723f37-7b58-4263-a89d-e3f360d934ac [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 2 23:05:14.558: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-3805" for this suite. 
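For reference, the two probes above (the interface-bound HTTP check and the UDP check against the hostPort) can be reproduced by hand with kubectl exec against the same host-network pod; a minimal sketch using the namespace, pod name and addresses from this run, which will differ on another cluster:
# HTTP probe: bind the source to the node IP and hit the hostPort via localhost
kubectl -n sched-pred-3805 exec e2e-host-exec -- \
  /bin/sh -c 'curl -g --connect-timeout 5 --interface 172.18.0.13 http://127.0.0.1:54321/hostname'
# UDP probe of the same hostPort on the node IP (-u UDP, -z scan only, -w 5s timeout)
kubectl -n sched-pred-3805 exec e2e-host-exec -- \
  /bin/sh -c 'nc -vuz -w 5 172.18.0.13 54321'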
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:83 • [SLOW TEST:47.245 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]","total":309,"completed":93,"skipped":1537,"failed":0} SSSSSSSS ------------------------------ [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 2 23:05:14.566: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247 [It] should create and stop a working application [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: creating all guestbook components Feb 2 23:05:14.627: INFO: apiVersion: v1 kind: Service metadata: name: agnhost-replica labels: app: agnhost role: replica tier: backend spec: ports: - port: 6379 selector: app: agnhost role: replica tier: backend Feb 2 23:05:14.627: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-7991 create -f -' Feb 2 23:05:15.073: INFO: stderr: "" Feb 2 23:05:15.073: INFO: stdout: "service/agnhost-replica created\n" Feb 2 23:05:15.073: INFO: apiVersion: v1 kind: Service metadata: name: agnhost-primary labels: app: agnhost role: primary tier: backend spec: ports: - port: 6379 targetPort: 6379 selector: app: agnhost role: primary tier: backend Feb 2 23:05:15.073: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-7991 create -f -' Feb 2 23:05:15.376: INFO: stderr: "" Feb 2 23:05:15.376: INFO: stdout: "service/agnhost-primary created\n" Feb 2 23:05:15.376: INFO: apiVersion: v1 kind: Service metadata: name: frontend labels: app: guestbook tier: frontend spec: # if your cluster supports it, uncomment the following to automatically create # an external load-balanced IP for the frontend service. 
# type: LoadBalancer ports: - port: 80 selector: app: guestbook tier: frontend Feb 2 23:05:15.377: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-7991 create -f -' Feb 2 23:05:15.724: INFO: stderr: "" Feb 2 23:05:15.724: INFO: stdout: "service/frontend created\n" Feb 2 23:05:15.724: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: frontend spec: replicas: 3 selector: matchLabels: app: guestbook tier: frontend template: metadata: labels: app: guestbook tier: frontend spec: containers: - name: guestbook-frontend image: k8s.gcr.io/e2e-test-images/agnhost:2.21 args: [ "guestbook", "--backend-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 80 Feb 2 23:05:15.724: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-7991 create -f -' Feb 2 23:05:16.216: INFO: stderr: "" Feb 2 23:05:16.216: INFO: stdout: "deployment.apps/frontend created\n" Feb 2 23:05:16.216: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: agnhost-primary spec: replicas: 1 selector: matchLabels: app: agnhost role: primary tier: backend template: metadata: labels: app: agnhost role: primary tier: backend spec: containers: - name: primary image: k8s.gcr.io/e2e-test-images/agnhost:2.21 args: [ "guestbook", "--http-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379 Feb 2 23:05:16.216: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-7991 create -f -' Feb 2 23:05:16.567: INFO: stderr: "" Feb 2 23:05:16.567: INFO: stdout: "deployment.apps/agnhost-primary created\n" Feb 2 23:05:16.567: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: agnhost-replica spec: replicas: 2 selector: matchLabels: app: agnhost role: replica tier: backend template: metadata: labels: app: agnhost role: replica tier: backend spec: containers: - name: replica image: k8s.gcr.io/e2e-test-images/agnhost:2.21 args: [ "guestbook", "--replicaof", "agnhost-primary", "--http-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379 Feb 2 23:05:16.567: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-7991 create -f -' Feb 2 23:05:16.987: INFO: stderr: "" Feb 2 23:05:16.987: INFO: stdout: "deployment.apps/agnhost-replica created\n" STEP: validating guestbook app Feb 2 23:05:16.987: INFO: Waiting for all frontend pods to be Running. Feb 2 23:05:27.038: INFO: Waiting for frontend to serve content. Feb 2 23:05:28.735: INFO: Trying to add a new entry to the guestbook. Feb 2 23:05:28.745: INFO: Verifying that added entry can be retrieved. STEP: using delete to clean up resources Feb 2 23:05:28.752: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-7991 delete --grace-period=0 --force -f -' Feb 2 23:05:28.904: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Feb 2 23:05:28.904: INFO: stdout: "service \"agnhost-replica\" force deleted\n" STEP: using delete to clean up resources Feb 2 23:05:28.904: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-7991 delete --grace-period=0 --force -f -' Feb 2 23:05:29.241: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Feb 2 23:05:29.241: INFO: stdout: "service \"agnhost-primary\" force deleted\n" STEP: using delete to clean up resources Feb 2 23:05:29.241: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-7991 delete --grace-period=0 --force -f -' Feb 2 23:05:29.393: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Feb 2 23:05:29.393: INFO: stdout: "service \"frontend\" force deleted\n" STEP: using delete to clean up resources Feb 2 23:05:29.393: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-7991 delete --grace-period=0 --force -f -' Feb 2 23:05:29.499: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Feb 2 23:05:29.499: INFO: stdout: "deployment.apps \"frontend\" force deleted\n" STEP: using delete to clean up resources Feb 2 23:05:29.500: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-7991 delete --grace-period=0 --force -f -' Feb 2 23:05:29.619: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Feb 2 23:05:29.619: INFO: stdout: "deployment.apps \"agnhost-primary\" force deleted\n" STEP: using delete to clean up resources Feb 2 23:05:29.619: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-7991 delete --grace-period=0 --force -f -' Feb 2 23:05:30.036: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Feb 2 23:05:30.036: INFO: stdout: "deployment.apps \"agnhost-replica\" force deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 2 23:05:30.036: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7991" for this suite. 
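Each guestbook component above is created by piping its inline manifest to kubectl create -f - and later force-deleted the same way; a condensed sketch of that pattern using the frontend Service manifest from this run (the minimal delete manifest is shorthand on our part, not the test's exact input):
kubectl --namespace=kubectl-7991 create -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # type: LoadBalancer   # uncomment if the cluster supports external load balancers
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend
EOF
# cleanup: immediate, forced deletion (does not wait for the pods/endpoints to terminate)
kubectl --namespace=kubectl-7991 delete --grace-period=0 --force -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: frontend
EOF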
• [SLOW TEST:15.577 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Guestbook application /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:342 should create and stop a working application [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]","total":309,"completed":94,"skipped":1545,"failed":0} SSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [k8s.io] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 2 23:05:30.144: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a pod to test override command Feb 2 23:05:30.549: INFO: Waiting up to 5m0s for pod "client-containers-c52cf3c6-05c7-4d48-ad58-3714fa729906" in namespace "containers-8293" to be "Succeeded or Failed" Feb 2 23:05:30.567: INFO: Pod "client-containers-c52cf3c6-05c7-4d48-ad58-3714fa729906": Phase="Pending", Reason="", readiness=false. Elapsed: 18.289948ms Feb 2 23:05:32.852: INFO: Pod "client-containers-c52cf3c6-05c7-4d48-ad58-3714fa729906": Phase="Pending", Reason="", readiness=false. Elapsed: 2.303092806s Feb 2 23:05:34.886: INFO: Pod "client-containers-c52cf3c6-05c7-4d48-ad58-3714fa729906": Phase="Pending", Reason="", readiness=false. Elapsed: 4.337417689s Feb 2 23:05:36.912: INFO: Pod "client-containers-c52cf3c6-05c7-4d48-ad58-3714fa729906": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.36284484s STEP: Saw pod success Feb 2 23:05:36.912: INFO: Pod "client-containers-c52cf3c6-05c7-4d48-ad58-3714fa729906" satisfied condition "Succeeded or Failed" Feb 2 23:05:36.915: INFO: Trying to get logs from node leguer-worker pod client-containers-c52cf3c6-05c7-4d48-ad58-3714fa729906 container agnhost-container: STEP: delete the pod Feb 2 23:05:36.935: INFO: Waiting for pod client-containers-c52cf3c6-05c7-4d48-ad58-3714fa729906 to disappear Feb 2 23:05:36.939: INFO: Pod client-containers-c52cf3c6-05c7-4d48-ad58-3714fa729906 no longer exists [AfterEach] [k8s.io] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 2 23:05:36.939: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-8293" for this suite. 
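The override exercised by this test relies on spec.containers[].command replacing the image's ENTRYPOINT (args would replace CMD); a minimal sketch of such a pod, where the pod name and the echoed payload are illustrative rather than the test's actual command:
kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: entrypoint-override-demo
spec:
  restartPolicy: Never
  containers:
  - name: demo
    image: k8s.gcr.io/e2e-test-images/agnhost:2.21
    # command overrides the image ENTRYPOINT, so the container runs this instead
    command: ["/bin/sh", "-c", "echo overridden entrypoint"]
EOF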
• [SLOW TEST:6.803 seconds] [k8s.io] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":309,"completed":95,"skipped":1548,"failed":0} SSSSSS ------------------------------ [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 2 23:05:36.947: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod UID as env vars [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a pod to test downward api env vars Feb 2 23:05:37.054: INFO: Waiting up to 5m0s for pod "downward-api-771ce883-d708-4fe9-a016-505289d8de75" in namespace "downward-api-6943" to be "Succeeded or Failed" Feb 2 23:05:37.138: INFO: Pod "downward-api-771ce883-d708-4fe9-a016-505289d8de75": Phase="Pending", Reason="", readiness=false. Elapsed: 84.366343ms Feb 2 23:05:39.142: INFO: Pod "downward-api-771ce883-d708-4fe9-a016-505289d8de75": Phase="Pending", Reason="", readiness=false. Elapsed: 2.088540624s Feb 2 23:05:41.147: INFO: Pod "downward-api-771ce883-d708-4fe9-a016-505289d8de75": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.093598119s STEP: Saw pod success Feb 2 23:05:41.147: INFO: Pod "downward-api-771ce883-d708-4fe9-a016-505289d8de75" satisfied condition "Succeeded or Failed" Feb 2 23:05:41.151: INFO: Trying to get logs from node leguer-worker pod downward-api-771ce883-d708-4fe9-a016-505289d8de75 container dapi-container: STEP: delete the pod Feb 2 23:05:41.179: INFO: Waiting for pod downward-api-771ce883-d708-4fe9-a016-505289d8de75 to disappear Feb 2 23:05:41.210: INFO: Pod downward-api-771ce883-d708-4fe9-a016-505289d8de75 no longer exists [AfterEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 2 23:05:41.210: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6943" for this suite. 
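The pod UID reaches the container through a downward-API environment variable (valueFrom.fieldRef with fieldPath metadata.uid); a minimal sketch, in which the pod name, container command and POD_UID variable name are illustrative:
kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downward-uid-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: k8s.gcr.io/e2e-test-images/agnhost:2.21
    # print the injected variable so it is visible in the container logs
    command: ["/bin/sh", "-c", "env | grep POD_UID"]
    env:
    - name: POD_UID
      valueFrom:
        fieldRef:
          fieldPath: metadata.uid
EOF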
•{"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":309,"completed":96,"skipped":1554,"failed":0} SSSSSSSS ------------------------------ [sig-apps] Deployment deployment should delete old replica sets [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 2 23:05:41.219: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:85 [It] deployment should delete old replica sets [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 Feb 2 23:05:41.646: INFO: Pod name cleanup-pod: Found 0 pods out of 1 Feb 2 23:05:46.650: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Feb 2 23:05:46.650: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:79 Feb 2 23:05:46.729: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:{test-cleanup-deployment deployment-9502 b29be659-f661-4ead-9cb2-da27a7d5b9ef 4177111 1 2021-02-02 23:05:46 +0000 UTC map[name:cleanup-pod] map[] [] [] [{e2e.test Update apps/v1 2021-02-02 23:05:46 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.21 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0020c9a38 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] 
}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[]DeploymentCondition{},ReadyReplicas:0,CollisionCount:nil,},} Feb 2 23:05:46.743: INFO: New ReplicaSet "test-cleanup-deployment-685c4f8568" of Deployment "test-cleanup-deployment": &ReplicaSet{ObjectMeta:{test-cleanup-deployment-685c4f8568 deployment-9502 728b0267-db6e-4847-8e32-aebffc844a16 4177113 1 2021-02-02 23:05:46 +0000 UTC map[name:cleanup-pod pod-template-hash:685c4f8568] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-cleanup-deployment b29be659-f661-4ead-9cb2-da27a7d5b9ef 0xc002e58c27 0xc002e58c28}] [] [{kube-controller-manager Update apps/v1 2021-02-02 23:05:46 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b29be659-f661-4ead-9cb2-da27a7d5b9ef\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 685c4f8568,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod-template-hash:685c4f8568] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.21 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc002e58cb8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:0,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Feb 2 23:05:46.743: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment": Feb 2 23:05:46.743: INFO: &ReplicaSet{ObjectMeta:{test-cleanup-controller deployment-9502 07552848-61d4-4bf3-be20-5ac81891064a 4177112 1 2021-02-02 23:05:41 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [{apps/v1 Deployment test-cleanup-deployment b29be659-f661-4ead-9cb2-da27a7d5b9ef 0xc002e58b17 0xc002e58b18}] [] [{e2e.test Update 
apps/v1 2021-02-02 23:05:41 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2021-02-02 23:05:46 +0000 UTC FieldsV1 {"f:metadata":{"f:ownerReferences":{".":{},"k:{\"uid\":\"b29be659-f661-4ead-9cb2-da27a7d5b9ef\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc002e58bb8 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Feb 2 23:05:46.809: INFO: Pod "test-cleanup-controller-8zrgt" is available: &Pod{ObjectMeta:{test-cleanup-controller-8zrgt test-cleanup-controller- deployment-9502 249dce3a-0120-4bde-888a-8b2056e9949b 4177095 0 2021-02-02 23:05:41 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [{apps/v1 ReplicaSet test-cleanup-controller 07552848-61d4-4bf3-be20-5ac81891064a 0xc002e59377 0xc002e59378}] [] [{kube-controller-manager Update v1 2021-02-02 23:05:41 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"07552848-61d4-4bf3-be20-5ac81891064a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-02-02 23:05:45 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.16\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-m46gl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-m46gl,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-m46gl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-02 23:05:41 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-02 23:05:45 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2021-02-02 23:05:45 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-02 23:05:41 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:10.244.2.16,StartTime:2021-02-02 23:05:41 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-02-02 23:05:44 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://ca33cf265c1bba0418fc61e2aeaf77c2146ca726164e16554542dbbe782b7a0e,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.16,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Feb 2 23:05:46.809: INFO: Pod "test-cleanup-deployment-685c4f8568-5npbl" is not available: &Pod{ObjectMeta:{test-cleanup-deployment-685c4f8568-5npbl test-cleanup-deployment-685c4f8568- deployment-9502 5778fec4-7616-4fb0-8f5f-266b9d65e18a 4177119 0 2021-02-02 23:05:46 +0000 UTC map[name:cleanup-pod pod-template-hash:685c4f8568] map[] [{apps/v1 ReplicaSet test-cleanup-deployment-685c4f8568 728b0267-db6e-4847-8e32-aebffc844a16 0xc002e596d7 0xc002e596d8}] [] [{kube-controller-manager Update v1 2021-02-02 23:05:46 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"728b0267-db6e-4847-8e32-aebffc844a16\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-m46gl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-m46gl,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:k8s.gcr.io/e2e-test-images/agnhost:2.21,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-m46gl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:ni
l,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-02 23:05:46 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 2 23:05:46.810: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-9502" for this suite. 
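The cleanup behaviour verified here follows from the Deployment's revision history limit of 0 (visible as RevisionHistoryLimit:*0 in the object dump above), which lets the controller garbage-collect superseded ReplicaSets immediately; a minimal manifest sketch reconstructed from that dump:
kubectl -n deployment-9502 create -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-cleanup-deployment
spec:
  replicas: 1
  revisionHistoryLimit: 0   # keep no old ReplicaSets around after a rollout
  selector:
    matchLabels:
      name: cleanup-pod
  template:
    metadata:
      labels:
        name: cleanup-pod
    spec:
      containers:
      - name: agnhost
        image: k8s.gcr.io/e2e-test-images/agnhost:2.21
EOF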
• [SLOW TEST:5.649 seconds] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should delete old replica sets [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":309,"completed":97,"skipped":1562,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 2 23:05:46.869: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating secret with name s-test-opt-del-2e85c8a1-fbf2-4ae0-830e-74965baa786b STEP: Creating secret with name s-test-opt-upd-3d920200-ebc5-4f2b-8579-8f45be092d79 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-2e85c8a1-fbf2-4ae0-830e-74965baa786b STEP: Updating secret s-test-opt-upd-3d920200-ebc5-4f2b-8579-8f45be092d79 STEP: Creating secret with name s-test-opt-create-6e0dd303-ebfc-4768-b683-fd54b2409cb5 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 2 23:05:57.082: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8554" for this suite. 
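The optional-secret behaviour above corresponds to a projected volume whose secret sources set optional: true, so the volume tolerates a source secret being deleted or created after the pod starts; a minimal sketch using the secret names from this run (the mount path and sleep command are illustrative):
kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-secret-demo
spec:
  containers:
  - name: reader
    image: k8s.gcr.io/e2e-test-images/agnhost:2.21
    command: ["/bin/sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: secrets
      mountPath: /etc/projected
  volumes:
  - name: secrets
    projected:
      sources:
      - secret:
          name: s-test-opt-del-2e85c8a1-fbf2-4ae0-830e-74965baa786b
          optional: true
      - secret:
          name: s-test-opt-create-6e0dd303-ebfc-4768-b683-fd54b2409cb5
          optional: true
EOF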
• [SLOW TEST:10.224 seconds] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35 optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":309,"completed":98,"skipped":1618,"failed":0} SSSSSSS ------------------------------ [sig-network] Services should be able to create a functioning NodePort service [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 2 23:05:57.093: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:745 [It] should be able to create a functioning NodePort service [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: creating service nodeport-test with type=NodePort in namespace services-3786 STEP: creating replication controller nodeport-test in namespace services-3786 I0202 23:05:57.366815 7 runners.go:190] Created replication controller with name: nodeport-test, namespace: services-3786, replica count: 2 I0202 23:06:00.417202 7 runners.go:190] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0202 23:06:03.417472 7 runners.go:190] nodeport-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Feb 2 23:06:03.417: INFO: Creating new exec pod Feb 2 23:06:08.568: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=services-3786 exec execpod5ssct -- /bin/sh -x -c nc -zv -t -w 2 nodeport-test 80' Feb 2 23:06:08.811: INFO: stderr: "I0202 23:06:08.702700 983 log.go:181] (0xc00012e000) (0xc0005c4000) Create stream\nI0202 23:06:08.702896 983 log.go:181] (0xc00012e000) (0xc0005c4000) Stream added, broadcasting: 1\nI0202 23:06:08.705687 983 log.go:181] (0xc00012e000) Reply frame received for 1\nI0202 23:06:08.705729 983 log.go:181] (0xc00012e000) (0xc0009fe140) Create stream\nI0202 23:06:08.705741 983 log.go:181] (0xc00012e000) (0xc0009fe140) Stream added, broadcasting: 3\nI0202 23:06:08.706750 983 log.go:181] (0xc00012e000) Reply frame received for 3\nI0202 23:06:08.706792 983 log.go:181] (0xc00012e000) (0xc000aa2000) Create stream\nI0202 23:06:08.706803 983 log.go:181] (0xc00012e000) (0xc000aa2000) Stream added, broadcasting: 5\nI0202 23:06:08.707806 983 log.go:181] (0xc00012e000) Reply frame received for 5\nI0202 23:06:08.802707 983 log.go:181] (0xc00012e000) Data frame received for 5\nI0202 23:06:08.802751 983 log.go:181] (0xc000aa2000) (5) Data frame handling\nI0202 23:06:08.802773 983 log.go:181] (0xc000aa2000) (5) Data frame sent\n+ nc -zv -t -w 2 nodeport-test 80\nI0202 
23:06:08.803427 983 log.go:181] (0xc00012e000) Data frame received for 5\nI0202 23:06:08.803459 983 log.go:181] (0xc000aa2000) (5) Data frame handling\nI0202 23:06:08.803477 983 log.go:181] (0xc000aa2000) (5) Data frame sent\nI0202 23:06:08.803492 983 log.go:181] (0xc00012e000) Data frame received for 5\nI0202 23:06:08.803503 983 log.go:181] (0xc000aa2000) (5) Data frame handling\nConnection to nodeport-test 80 port [tcp/http] succeeded!\nI0202 23:06:08.803540 983 log.go:181] (0xc00012e000) Data frame received for 3\nI0202 23:06:08.803577 983 log.go:181] (0xc0009fe140) (3) Data frame handling\nI0202 23:06:08.805229 983 log.go:181] (0xc00012e000) Data frame received for 1\nI0202 23:06:08.805408 983 log.go:181] (0xc0005c4000) (1) Data frame handling\nI0202 23:06:08.805436 983 log.go:181] (0xc0005c4000) (1) Data frame sent\nI0202 23:06:08.805456 983 log.go:181] (0xc00012e000) (0xc0005c4000) Stream removed, broadcasting: 1\nI0202 23:06:08.805474 983 log.go:181] (0xc00012e000) Go away received\nI0202 23:06:08.805934 983 log.go:181] (0xc00012e000) (0xc0005c4000) Stream removed, broadcasting: 1\nI0202 23:06:08.805953 983 log.go:181] (0xc00012e000) (0xc0009fe140) Stream removed, broadcasting: 3\nI0202 23:06:08.805963 983 log.go:181] (0xc00012e000) (0xc000aa2000) Stream removed, broadcasting: 5\n" Feb 2 23:06:08.811: INFO: stdout: "" Feb 2 23:06:08.812: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=services-3786 exec execpod5ssct -- /bin/sh -x -c nc -zv -t -w 2 10.96.96.111 80' Feb 2 23:06:09.015: INFO: stderr: "I0202 23:06:08.938642 999 log.go:181] (0xc000280210) (0xc000b12000) Create stream\nI0202 23:06:08.938700 999 log.go:181] (0xc000280210) (0xc000b12000) Stream added, broadcasting: 1\nI0202 23:06:08.940640 999 log.go:181] (0xc000280210) Reply frame received for 1\nI0202 23:06:08.940681 999 log.go:181] (0xc000280210) (0xc000b120a0) Create stream\nI0202 23:06:08.940694 999 log.go:181] (0xc000280210) (0xc000b120a0) Stream added, broadcasting: 3\nI0202 23:06:08.941788 999 log.go:181] (0xc000280210) Reply frame received for 3\nI0202 23:06:08.941838 999 log.go:181] (0xc000280210) (0xc000622640) Create stream\nI0202 23:06:08.941878 999 log.go:181] (0xc000280210) (0xc000622640) Stream added, broadcasting: 5\nI0202 23:06:08.942796 999 log.go:181] (0xc000280210) Reply frame received for 5\nI0202 23:06:09.009695 999 log.go:181] (0xc000280210) Data frame received for 5\nI0202 23:06:09.009741 999 log.go:181] (0xc000622640) (5) Data frame handling\nI0202 23:06:09.009758 999 log.go:181] (0xc000622640) (5) Data frame sent\nI0202 23:06:09.009767 999 log.go:181] (0xc000280210) Data frame received for 5\nI0202 23:06:09.009773 999 log.go:181] (0xc000622640) (5) Data frame handling\n+ nc -zv -t -w 2 10.96.96.111 80\nConnection to 10.96.96.111 80 port [tcp/http] succeeded!\nI0202 23:06:09.009815 999 log.go:181] (0xc000280210) Data frame received for 3\nI0202 23:06:09.009831 999 log.go:181] (0xc000b120a0) (3) Data frame handling\nI0202 23:06:09.010802 999 log.go:181] (0xc000280210) Data frame received for 1\nI0202 23:06:09.010830 999 log.go:181] (0xc000b12000) (1) Data frame handling\nI0202 23:06:09.010842 999 log.go:181] (0xc000b12000) (1) Data frame sent\nI0202 23:06:09.010856 999 log.go:181] (0xc000280210) (0xc000b12000) Stream removed, broadcasting: 1\nI0202 23:06:09.010868 999 log.go:181] (0xc000280210) Go away received\nI0202 23:06:09.011245 999 log.go:181] (0xc000280210) (0xc000b12000) Stream removed, broadcasting: 1\nI0202 
23:06:09.011262 999 log.go:181] (0xc000280210) (0xc000b120a0) Stream removed, broadcasting: 3\nI0202 23:06:09.011269 999 log.go:181] (0xc000280210) (0xc000622640) Stream removed, broadcasting: 5\n" Feb 2 23:06:09.015: INFO: stdout: "" Feb 2 23:06:09.015: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=services-3786 exec execpod5ssct -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.13 30400' Feb 2 23:06:09.229: INFO: stderr: "I0202 23:06:09.143499 1013 log.go:181] (0xc0006326e0) (0xc0005fc1e0) Create stream\nI0202 23:06:09.143616 1013 log.go:181] (0xc0006326e0) (0xc0005fc1e0) Stream added, broadcasting: 1\nI0202 23:06:09.149810 1013 log.go:181] (0xc0006326e0) Reply frame received for 1\nI0202 23:06:09.149876 1013 log.go:181] (0xc0006326e0) (0xc000a1a000) Create stream\nI0202 23:06:09.149898 1013 log.go:181] (0xc0006326e0) (0xc000a1a000) Stream added, broadcasting: 3\nI0202 23:06:09.150909 1013 log.go:181] (0xc0006326e0) Reply frame received for 3\nI0202 23:06:09.150942 1013 log.go:181] (0xc0006326e0) (0xc000d96000) Create stream\nI0202 23:06:09.150951 1013 log.go:181] (0xc0006326e0) (0xc000d96000) Stream added, broadcasting: 5\nI0202 23:06:09.151706 1013 log.go:181] (0xc0006326e0) Reply frame received for 5\nI0202 23:06:09.221145 1013 log.go:181] (0xc0006326e0) Data frame received for 5\nI0202 23:06:09.221181 1013 log.go:181] (0xc000d96000) (5) Data frame handling\nI0202 23:06:09.221219 1013 log.go:181] (0xc000d96000) (5) Data frame sent\n+ nc -zv -t -w 2 172.18.0.13 30400\nConnection to 172.18.0.13 30400 port [tcp/30400] succeeded!\nI0202 23:06:09.221251 1013 log.go:181] (0xc0006326e0) Data frame received for 3\nI0202 23:06:09.221264 1013 log.go:181] (0xc000a1a000) (3) Data frame handling\nI0202 23:06:09.221302 1013 log.go:181] (0xc0006326e0) Data frame received for 5\nI0202 23:06:09.221335 1013 log.go:181] (0xc000d96000) (5) Data frame handling\nI0202 23:06:09.222795 1013 log.go:181] (0xc0006326e0) Data frame received for 1\nI0202 23:06:09.222824 1013 log.go:181] (0xc0005fc1e0) (1) Data frame handling\nI0202 23:06:09.222842 1013 log.go:181] (0xc0005fc1e0) (1) Data frame sent\nI0202 23:06:09.222861 1013 log.go:181] (0xc0006326e0) (0xc0005fc1e0) Stream removed, broadcasting: 1\nI0202 23:06:09.222888 1013 log.go:181] (0xc0006326e0) Go away received\nI0202 23:06:09.223300 1013 log.go:181] (0xc0006326e0) (0xc0005fc1e0) Stream removed, broadcasting: 1\nI0202 23:06:09.223328 1013 log.go:181] (0xc0006326e0) (0xc000a1a000) Stream removed, broadcasting: 3\nI0202 23:06:09.223339 1013 log.go:181] (0xc0006326e0) (0xc000d96000) Stream removed, broadcasting: 5\n" Feb 2 23:06:09.229: INFO: stdout: "" Feb 2 23:06:09.229: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=services-3786 exec execpod5ssct -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.12 30400' Feb 2 23:06:09.452: INFO: stderr: "I0202 23:06:09.371508 1031 log.go:181] (0xc0007aa000) (0xc000ac0000) Create stream\nI0202 23:06:09.371570 1031 log.go:181] (0xc0007aa000) (0xc000ac0000) Stream added, broadcasting: 1\nI0202 23:06:09.376569 1031 log.go:181] (0xc0007aa000) Reply frame received for 1\nI0202 23:06:09.376601 1031 log.go:181] (0xc0007aa000) (0xc000714dc0) Create stream\nI0202 23:06:09.376610 1031 log.go:181] (0xc0007aa000) (0xc000714dc0) Stream added, broadcasting: 3\nI0202 23:06:09.378292 1031 log.go:181] (0xc0007aa000) Reply frame received for 3\nI0202 23:06:09.378341 1031 log.go:181] (0xc0007aa000) (0xc000715040) 
Create stream\nI0202 23:06:09.378363 1031 log.go:181] (0xc0007aa000) (0xc000715040) Stream added, broadcasting: 5\nI0202 23:06:09.379259 1031 log.go:181] (0xc0007aa000) Reply frame received for 5\nI0202 23:06:09.444777 1031 log.go:181] (0xc0007aa000) Data frame received for 5\nI0202 23:06:09.444816 1031 log.go:181] (0xc000715040) (5) Data frame handling\nI0202 23:06:09.444929 1031 log.go:181] (0xc000715040) (5) Data frame sent\n+ nc -zv -t -w 2 172.18.0.12 30400\nI0202 23:06:09.444991 1031 log.go:181] (0xc0007aa000) Data frame received for 5\nI0202 23:06:09.445008 1031 log.go:181] (0xc000715040) (5) Data frame handling\nI0202 23:06:09.445030 1031 log.go:181] (0xc000715040) (5) Data frame sent\nConnection to 172.18.0.12 30400 port [tcp/30400] succeeded!\nI0202 23:06:09.445903 1031 log.go:181] (0xc0007aa000) Data frame received for 5\nI0202 23:06:09.445920 1031 log.go:181] (0xc000715040) (5) Data frame handling\nI0202 23:06:09.446027 1031 log.go:181] (0xc0007aa000) Data frame received for 3\nI0202 23:06:09.446047 1031 log.go:181] (0xc000714dc0) (3) Data frame handling\nI0202 23:06:09.446965 1031 log.go:181] (0xc0007aa000) Data frame received for 1\nI0202 23:06:09.446982 1031 log.go:181] (0xc000ac0000) (1) Data frame handling\nI0202 23:06:09.446993 1031 log.go:181] (0xc000ac0000) (1) Data frame sent\nI0202 23:06:09.447014 1031 log.go:181] (0xc0007aa000) (0xc000ac0000) Stream removed, broadcasting: 1\nI0202 23:06:09.447056 1031 log.go:181] (0xc0007aa000) Go away received\nI0202 23:06:09.447280 1031 log.go:181] (0xc0007aa000) (0xc000ac0000) Stream removed, broadcasting: 1\nI0202 23:06:09.447291 1031 log.go:181] (0xc0007aa000) (0xc000714dc0) Stream removed, broadcasting: 3\nI0202 23:06:09.447298 1031 log.go:181] (0xc0007aa000) (0xc000715040) Stream removed, broadcasting: 5\n" Feb 2 23:06:09.452: INFO: stdout: "" [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 2 23:06:09.452: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-3786" for this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 • [SLOW TEST:12.368 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to create a functioning NodePort service [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":309,"completed":99,"skipped":1625,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 2 23:06:09.462: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with terminating scopes. 
[Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a ResourceQuota with terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a long running pod STEP: Ensuring resource quota with not terminating scope captures the pod usage STEP: Ensuring resource quota with terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a terminating pod STEP: Ensuring resource quota with terminating scope captures the pod usage STEP: Ensuring resource quota with not terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 2 23:06:25.957: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-546" for this suite. • [SLOW TEST:16.504 seconds] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with terminating scopes. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance]","total":309,"completed":100,"skipped":1649,"failed":0} SSSSSSSS ------------------------------ [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 2 23:06:25.967: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:187 [It] should contain environment variables for services [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 Feb 2 23:06:30.188: INFO: Waiting up to 5m0s for pod "client-envvars-d9ae8694-8258-413c-9b70-f452abecdbc0" in namespace "pods-285" to be "Succeeded or Failed" Feb 2 23:06:30.197: INFO: Pod "client-envvars-d9ae8694-8258-413c-9b70-f452abecdbc0": Phase="Pending", Reason="", readiness=false. Elapsed: 8.85657ms Feb 2 23:06:32.202: INFO: Pod "client-envvars-d9ae8694-8258-413c-9b70-f452abecdbc0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014070467s Feb 2 23:06:34.206: INFO: Pod "client-envvars-d9ae8694-8258-413c-9b70-f452abecdbc0": Phase="Running", Reason="", readiness=true. Elapsed: 4.018182548s Feb 2 23:06:36.211: INFO: Pod "client-envvars-d9ae8694-8258-413c-9b70-f452abecdbc0": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.023104215s STEP: Saw pod success Feb 2 23:06:36.211: INFO: Pod "client-envvars-d9ae8694-8258-413c-9b70-f452abecdbc0" satisfied condition "Succeeded or Failed" Feb 2 23:06:36.214: INFO: Trying to get logs from node leguer-worker2 pod client-envvars-d9ae8694-8258-413c-9b70-f452abecdbc0 container env3cont: STEP: delete the pod Feb 2 23:06:36.272: INFO: Waiting for pod client-envvars-d9ae8694-8258-413c-9b70-f452abecdbc0 to disappear Feb 2 23:06:36.276: INFO: Pod client-envvars-d9ae8694-8258-413c-9b70-f452abecdbc0 no longer exists [AfterEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 2 23:06:36.276: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-285" for this suite. • [SLOW TEST:10.316 seconds] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 should contain environment variables for services [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":309,"completed":101,"skipped":1657,"failed":0} SSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 2 23:06:36.282: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247 [It] should check is all data is printed [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 Feb 2 23:06:36.382: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-4010 version' Feb 2 23:06:36.533: INFO: stderr: "" Feb 2 23:06:36.533: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"20\", GitVersion:\"v1.20.1\", GitCommit:\"c4d752765b3bbac2237bf87cf0b1c2e307844666\", GitTreeState:\"clean\", BuildDate:\"2020-12-18T12:09:25Z\", GoVersion:\"go1.15.5\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"20\", GitVersion:\"v1.20.0\", GitCommit:\"af46c47ce925f4c4ad5cc8d1fca46c7b77d13b38\", GitTreeState:\"clean\", BuildDate:\"2020-12-08T22:31:47Z\", GoVersion:\"go1.15.5\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 2 23:06:36.533: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4010" for this suite. 
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance]","total":309,"completed":102,"skipped":1664,"failed":0} SSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 2 23:06:36.543: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Performing setup for networking test in namespace pod-network-test-2994 STEP: creating a selector STEP: Creating the service pods in kubernetes Feb 2 23:06:36.652: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Feb 2 23:06:36.690: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Feb 2 23:06:38.695: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Feb 2 23:06:40.694: INFO: The status of Pod netserver-0 is Running (Ready = false) Feb 2 23:06:42.756: INFO: The status of Pod netserver-0 is Running (Ready = false) Feb 2 23:06:44.714: INFO: The status of Pod netserver-0 is Running (Ready = false) Feb 2 23:06:46.695: INFO: The status of Pod netserver-0 is Running (Ready = false) Feb 2 23:06:48.694: INFO: The status of Pod netserver-0 is Running (Ready = false) Feb 2 23:06:50.696: INFO: The status of Pod netserver-0 is Running (Ready = false) Feb 2 23:06:52.695: INFO: The status of Pod netserver-0 is Running (Ready = false) Feb 2 23:06:54.702: INFO: The status of Pod netserver-0 is Running (Ready = false) Feb 2 23:06:56.720: INFO: The status of Pod netserver-0 is Running (Ready = true) Feb 2 23:06:56.726: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Feb 2 23:07:00.792: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2 Feb 2 23:07:00.792: INFO: Going to poll 10.244.2.22 on port 8080 at least 0 times, with a maximum of 34 tries before failing Feb 2 23:07:00.795: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.22:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-2994 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Feb 2 23:07:00.795: INFO: >>> kubeConfig: /root/.kube/config I0202 23:07:00.828188 7 log.go:181] (0xc004450370) (0xc0025fb540) Create stream I0202 23:07:00.828218 7 log.go:181] (0xc004450370) (0xc0025fb540) Stream added, broadcasting: 1 I0202 23:07:00.830152 7 log.go:181] (0xc004450370) Reply frame received for 1 I0202 23:07:00.830185 7 log.go:181] (0xc004450370) (0xc003374be0) Create stream I0202 23:07:00.830198 7 log.go:181] (0xc004450370) (0xc003374be0) Stream added, broadcasting: 3 I0202 23:07:00.831121 7 log.go:181] (0xc004450370) Reply frame received for 3 I0202 23:07:00.831162 7 
log.go:181] (0xc004450370) (0xc002efc280) Create stream I0202 23:07:00.831179 7 log.go:181] (0xc004450370) (0xc002efc280) Stream added, broadcasting: 5 I0202 23:07:00.832124 7 log.go:181] (0xc004450370) Reply frame received for 5 I0202 23:07:00.907414 7 log.go:181] (0xc004450370) Data frame received for 3 I0202 23:07:00.907439 7 log.go:181] (0xc003374be0) (3) Data frame handling I0202 23:07:00.907454 7 log.go:181] (0xc003374be0) (3) Data frame sent I0202 23:07:00.907630 7 log.go:181] (0xc004450370) Data frame received for 5 I0202 23:07:00.907645 7 log.go:181] (0xc002efc280) (5) Data frame handling I0202 23:07:00.907692 7 log.go:181] (0xc004450370) Data frame received for 3 I0202 23:07:00.907722 7 log.go:181] (0xc003374be0) (3) Data frame handling I0202 23:07:00.909549 7 log.go:181] (0xc004450370) Data frame received for 1 I0202 23:07:00.909581 7 log.go:181] (0xc0025fb540) (1) Data frame handling I0202 23:07:00.909617 7 log.go:181] (0xc0025fb540) (1) Data frame sent I0202 23:07:00.909689 7 log.go:181] (0xc004450370) (0xc0025fb540) Stream removed, broadcasting: 1 I0202 23:07:00.909772 7 log.go:181] (0xc004450370) Go away received I0202 23:07:00.909879 7 log.go:181] (0xc004450370) (0xc0025fb540) Stream removed, broadcasting: 1 I0202 23:07:00.909913 7 log.go:181] (0xc004450370) (0xc003374be0) Stream removed, broadcasting: 3 I0202 23:07:00.909945 7 log.go:181] (0xc004450370) (0xc002efc280) Stream removed, broadcasting: 5 Feb 2 23:07:00.909: INFO: Found all 1 expected endpoints: [netserver-0] Feb 2 23:07:00.910: INFO: Going to poll 10.244.1.206 on port 8080 at least 0 times, with a maximum of 34 tries before failing Feb 2 23:07:00.913: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.206:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-2994 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Feb 2 23:07:00.913: INFO: >>> kubeConfig: /root/.kube/config I0202 23:07:00.939969 7 log.go:181] (0xc00229e420) (0xc000ac5180) Create stream I0202 23:07:00.939996 7 log.go:181] (0xc00229e420) (0xc000ac5180) Stream added, broadcasting: 1 I0202 23:07:00.941656 7 log.go:181] (0xc00229e420) Reply frame received for 1 I0202 23:07:00.941681 7 log.go:181] (0xc00229e420) (0xc002efc320) Create stream I0202 23:07:00.941688 7 log.go:181] (0xc00229e420) (0xc002efc320) Stream added, broadcasting: 3 I0202 23:07:00.942567 7 log.go:181] (0xc00229e420) Reply frame received for 3 I0202 23:07:00.942623 7 log.go:181] (0xc00229e420) (0xc003374c80) Create stream I0202 23:07:00.942649 7 log.go:181] (0xc00229e420) (0xc003374c80) Stream added, broadcasting: 5 I0202 23:07:00.943417 7 log.go:181] (0xc00229e420) Reply frame received for 5 I0202 23:07:01.017508 7 log.go:181] (0xc00229e420) Data frame received for 3 I0202 23:07:01.017534 7 log.go:181] (0xc002efc320) (3) Data frame handling I0202 23:07:01.017560 7 log.go:181] (0xc002efc320) (3) Data frame sent I0202 23:07:01.017579 7 log.go:181] (0xc00229e420) Data frame received for 3 I0202 23:07:01.017591 7 log.go:181] (0xc002efc320) (3) Data frame handling I0202 23:07:01.017646 7 log.go:181] (0xc00229e420) Data frame received for 5 I0202 23:07:01.017665 7 log.go:181] (0xc003374c80) (5) Data frame handling I0202 23:07:01.019095 7 log.go:181] (0xc00229e420) Data frame received for 1 I0202 23:07:01.019108 7 log.go:181] (0xc000ac5180) (1) Data frame handling I0202 23:07:01.019115 7 log.go:181] (0xc000ac5180) (1) Data frame sent 
I0202 23:07:01.019122 7 log.go:181] (0xc00229e420) (0xc000ac5180) Stream removed, broadcasting: 1 I0202 23:07:01.019188 7 log.go:181] (0xc00229e420) Go away received I0202 23:07:01.019238 7 log.go:181] (0xc00229e420) (0xc000ac5180) Stream removed, broadcasting: 1 I0202 23:07:01.019281 7 log.go:181] (0xc00229e420) (0xc002efc320) Stream removed, broadcasting: 3 I0202 23:07:01.019297 7 log.go:181] (0xc00229e420) (0xc003374c80) Stream removed, broadcasting: 5 Feb 2 23:07:01.019: INFO: Found all 1 expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 2 23:07:01.019: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-2994" for this suite. • [SLOW TEST:24.485 seconds] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:27 Granular Checks: Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:30 should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":309,"completed":103,"skipped":1673,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 2 23:07:01.028: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating secret with name secret-test-62064977-bc3c-4ed7-9f47-c0af747029ab STEP: Creating a pod to test consume secrets Feb 2 23:07:01.214: INFO: Waiting up to 5m0s for pod "pod-secrets-8e73ee6a-fec6-481e-8bac-10fc2184c20a" in namespace "secrets-4442" to be "Succeeded or Failed" Feb 2 23:07:01.235: INFO: Pod "pod-secrets-8e73ee6a-fec6-481e-8bac-10fc2184c20a": Phase="Pending", Reason="", readiness=false. Elapsed: 20.882556ms Feb 2 23:07:03.241: INFO: Pod "pod-secrets-8e73ee6a-fec6-481e-8bac-10fc2184c20a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027186788s Feb 2 23:07:05.247: INFO: Pod "pod-secrets-8e73ee6a-fec6-481e-8bac-10fc2184c20a": Phase="Running", Reason="", readiness=true. Elapsed: 4.032886367s Feb 2 23:07:07.251: INFO: Pod "pod-secrets-8e73ee6a-fec6-481e-8bac-10fc2184c20a": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.036609202s STEP: Saw pod success Feb 2 23:07:07.251: INFO: Pod "pod-secrets-8e73ee6a-fec6-481e-8bac-10fc2184c20a" satisfied condition "Succeeded or Failed" Feb 2 23:07:07.283: INFO: Trying to get logs from node leguer-worker pod pod-secrets-8e73ee6a-fec6-481e-8bac-10fc2184c20a container secret-volume-test: STEP: delete the pod Feb 2 23:07:07.438: INFO: Waiting for pod pod-secrets-8e73ee6a-fec6-481e-8bac-10fc2184c20a to disappear Feb 2 23:07:07.450: INFO: Pod pod-secrets-8e73ee6a-fec6-481e-8bac-10fc2184c20a no longer exists [AfterEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 2 23:07:07.450: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-4442" for this suite. • [SLOW TEST:6.428 seconds] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":309,"completed":104,"skipped":1706,"failed":0} SSSSSSSS ------------------------------ [sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 2 23:07:07.457: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:745 [It] should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: creating service in namespace services-1102 STEP: creating service affinity-clusterip in namespace services-1102 STEP: creating replication controller affinity-clusterip in namespace services-1102 I0202 23:07:07.820137 7 runners.go:190] Created replication controller with name: affinity-clusterip, namespace: services-1102, replica count: 3 I0202 23:07:10.870642 7 runners.go:190] affinity-clusterip Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0202 23:07:13.870861 7 runners.go:190] affinity-clusterip Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Feb 2 23:07:13.877: INFO: Creating new exec pod Feb 2 23:07:18.900: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=services-1102 exec execpod-affinityxlxzl -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip 80' Feb 2 23:07:19.154: INFO: stderr: "I0202 23:07:19.050233 1068 log.go:181] 
(0xc000a75550) (0xc000a62960) Create stream\nI0202 23:07:19.050298 1068 log.go:181] (0xc000a75550) (0xc000a62960) Stream added, broadcasting: 1\nI0202 23:07:19.053499 1068 log.go:181] (0xc000a75550) Reply frame received for 1\nI0202 23:07:19.053571 1068 log.go:181] (0xc000a75550) (0xc000178000) Create stream\nI0202 23:07:19.053595 1068 log.go:181] (0xc000a75550) (0xc000178000) Stream added, broadcasting: 3\nI0202 23:07:19.054473 1068 log.go:181] (0xc000a75550) Reply frame received for 3\nI0202 23:07:19.054512 1068 log.go:181] (0xc000a75550) (0xc000828000) Create stream\nI0202 23:07:19.054522 1068 log.go:181] (0xc000a75550) (0xc000828000) Stream added, broadcasting: 5\nI0202 23:07:19.055290 1068 log.go:181] (0xc000a75550) Reply frame received for 5\nI0202 23:07:19.146669 1068 log.go:181] (0xc000a75550) Data frame received for 5\nI0202 23:07:19.146697 1068 log.go:181] (0xc000828000) (5) Data frame handling\nI0202 23:07:19.146716 1068 log.go:181] (0xc000828000) (5) Data frame sent\n+ nc -zv -t -w 2 affinity-clusterip 80\nI0202 23:07:19.146810 1068 log.go:181] (0xc000a75550) Data frame received for 5\nI0202 23:07:19.146834 1068 log.go:181] (0xc000828000) (5) Data frame handling\nI0202 23:07:19.146855 1068 log.go:181] (0xc000828000) (5) Data frame sent\nConnection to affinity-clusterip 80 port [tcp/http] succeeded!\nI0202 23:07:19.147098 1068 log.go:181] (0xc000a75550) Data frame received for 3\nI0202 23:07:19.147111 1068 log.go:181] (0xc000178000) (3) Data frame handling\nI0202 23:07:19.147165 1068 log.go:181] (0xc000a75550) Data frame received for 5\nI0202 23:07:19.147176 1068 log.go:181] (0xc000828000) (5) Data frame handling\nI0202 23:07:19.149183 1068 log.go:181] (0xc000a75550) Data frame received for 1\nI0202 23:07:19.149200 1068 log.go:181] (0xc000a62960) (1) Data frame handling\nI0202 23:07:19.149217 1068 log.go:181] (0xc000a62960) (1) Data frame sent\nI0202 23:07:19.149274 1068 log.go:181] (0xc000a75550) (0xc000a62960) Stream removed, broadcasting: 1\nI0202 23:07:19.149293 1068 log.go:181] (0xc000a75550) Go away received\nI0202 23:07:19.149578 1068 log.go:181] (0xc000a75550) (0xc000a62960) Stream removed, broadcasting: 1\nI0202 23:07:19.149594 1068 log.go:181] (0xc000a75550) (0xc000178000) Stream removed, broadcasting: 3\nI0202 23:07:19.149599 1068 log.go:181] (0xc000a75550) (0xc000828000) Stream removed, broadcasting: 5\n" Feb 2 23:07:19.154: INFO: stdout: "" Feb 2 23:07:19.155: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=services-1102 exec execpod-affinityxlxzl -- /bin/sh -x -c nc -zv -t -w 2 10.96.229.84 80' Feb 2 23:07:19.386: INFO: stderr: "I0202 23:07:19.308962 1086 log.go:181] (0xc00003a420) (0xc000b9e000) Create stream\nI0202 23:07:19.309022 1086 log.go:181] (0xc00003a420) (0xc000b9e000) Stream added, broadcasting: 1\nI0202 23:07:19.310999 1086 log.go:181] (0xc00003a420) Reply frame received for 1\nI0202 23:07:19.311050 1086 log.go:181] (0xc00003a420) (0xc000c9c000) Create stream\nI0202 23:07:19.311062 1086 log.go:181] (0xc00003a420) (0xc000c9c000) Stream added, broadcasting: 3\nI0202 23:07:19.312065 1086 log.go:181] (0xc00003a420) Reply frame received for 3\nI0202 23:07:19.312095 1086 log.go:181] (0xc00003a420) (0xc0007203c0) Create stream\nI0202 23:07:19.312103 1086 log.go:181] (0xc00003a420) (0xc0007203c0) Stream added, broadcasting: 5\nI0202 23:07:19.313209 1086 log.go:181] (0xc00003a420) Reply frame received for 5\nI0202 23:07:19.377899 1086 log.go:181] (0xc00003a420) Data frame received for 3\nI0202 
23:07:19.377959 1086 log.go:181] (0xc000c9c000) (3) Data frame handling\nI0202 23:07:19.377994 1086 log.go:181] (0xc00003a420) Data frame received for 5\nI0202 23:07:19.378012 1086 log.go:181] (0xc0007203c0) (5) Data frame handling\nI0202 23:07:19.378026 1086 log.go:181] (0xc0007203c0) (5) Data frame sent\nI0202 23:07:19.378039 1086 log.go:181] (0xc00003a420) Data frame received for 5\nI0202 23:07:19.378049 1086 log.go:181] (0xc0007203c0) (5) Data frame handling\n+ nc -zv -t -w 2 10.96.229.84 80\nConnection to 10.96.229.84 80 port [tcp/http] succeeded!\nI0202 23:07:19.379195 1086 log.go:181] (0xc00003a420) Data frame received for 1\nI0202 23:07:19.379222 1086 log.go:181] (0xc000b9e000) (1) Data frame handling\nI0202 23:07:19.379240 1086 log.go:181] (0xc000b9e000) (1) Data frame sent\nI0202 23:07:19.379253 1086 log.go:181] (0xc00003a420) (0xc000b9e000) Stream removed, broadcasting: 1\nI0202 23:07:19.379269 1086 log.go:181] (0xc00003a420) Go away received\nI0202 23:07:19.379696 1086 log.go:181] (0xc00003a420) (0xc000b9e000) Stream removed, broadcasting: 1\nI0202 23:07:19.379714 1086 log.go:181] (0xc00003a420) (0xc000c9c000) Stream removed, broadcasting: 3\nI0202 23:07:19.379723 1086 log.go:181] (0xc00003a420) (0xc0007203c0) Stream removed, broadcasting: 5\n" Feb 2 23:07:19.386: INFO: stdout: "" Feb 2 23:07:19.386: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=services-1102 exec execpod-affinityxlxzl -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.96.229.84:80/ ; done' Feb 2 23:07:19.694: INFO: stderr: "I0202 23:07:19.535910 1104 log.go:181] (0xc00003a420) (0xc0005ac0a0) Create stream\nI0202 23:07:19.536034 1104 log.go:181] (0xc00003a420) (0xc0005ac0a0) Stream added, broadcasting: 1\nI0202 23:07:19.538980 1104 log.go:181] (0xc00003a420) Reply frame received for 1\nI0202 23:07:19.539020 1104 log.go:181] (0xc00003a420) (0xc0005ac780) Create stream\nI0202 23:07:19.539032 1104 log.go:181] (0xc00003a420) (0xc0005ac780) Stream added, broadcasting: 3\nI0202 23:07:19.540086 1104 log.go:181] (0xc00003a420) Reply frame received for 3\nI0202 23:07:19.540129 1104 log.go:181] (0xc00003a420) (0xc0005561e0) Create stream\nI0202 23:07:19.540146 1104 log.go:181] (0xc00003a420) (0xc0005561e0) Stream added, broadcasting: 5\nI0202 23:07:19.541320 1104 log.go:181] (0xc00003a420) Reply frame received for 5\nI0202 23:07:19.599990 1104 log.go:181] (0xc00003a420) Data frame received for 3\nI0202 23:07:19.600040 1104 log.go:181] (0xc0005ac780) (3) Data frame handling\nI0202 23:07:19.600061 1104 log.go:181] (0xc0005ac780) (3) Data frame sent\nI0202 23:07:19.600092 1104 log.go:181] (0xc00003a420) Data frame received for 5\nI0202 23:07:19.600106 1104 log.go:181] (0xc0005561e0) (5) Data frame handling\nI0202 23:07:19.600130 1104 log.go:181] (0xc0005561e0) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.229.84:80/\nI0202 23:07:19.604680 1104 log.go:181] (0xc00003a420) Data frame received for 3\nI0202 23:07:19.604692 1104 log.go:181] (0xc0005ac780) (3) Data frame handling\nI0202 23:07:19.604702 1104 log.go:181] (0xc0005ac780) (3) Data frame sent\nI0202 23:07:19.605688 1104 log.go:181] (0xc00003a420) Data frame received for 5\nI0202 23:07:19.605700 1104 log.go:181] (0xc0005561e0) (5) Data frame handling\nI0202 23:07:19.605711 1104 log.go:181] (0xc0005561e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.229.84:80/\nI0202 23:07:19.605736 1104 
log.go:181] (0xc00003a420) Data frame received for 3\nI0202 23:07:19.605761 1104 log.go:181] (0xc0005ac780) (3) Data frame handling\nI0202 23:07:19.605787 1104 log.go:181] (0xc0005ac780) (3) Data frame sent\nI0202 23:07:19.611835 1104 log.go:181] (0xc00003a420) Data frame received for 3\nI0202 23:07:19.611853 1104 log.go:181] (0xc0005ac780) (3) Data frame handling\nI0202 23:07:19.611865 1104 log.go:181] (0xc0005ac780) (3) Data frame sent\nI0202 23:07:19.612814 1104 log.go:181] (0xc00003a420) Data frame received for 5\nI0202 23:07:19.612972 1104 log.go:181] (0xc0005561e0) (5) Data frame handling\nI0202 23:07:19.612995 1104 log.go:181] (0xc0005561e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.229.84:80/\nI0202 23:07:19.613018 1104 log.go:181] (0xc00003a420) Data frame received for 3\nI0202 23:07:19.613038 1104 log.go:181] (0xc0005ac780) (3) Data frame handling\nI0202 23:07:19.613047 1104 log.go:181] (0xc0005ac780) (3) Data frame sent\nI0202 23:07:19.616472 1104 log.go:181] (0xc00003a420) Data frame received for 3\nI0202 23:07:19.616493 1104 log.go:181] (0xc0005ac780) (3) Data frame handling\nI0202 23:07:19.616511 1104 log.go:181] (0xc0005ac780) (3) Data frame sent\nI0202 23:07:19.616924 1104 log.go:181] (0xc00003a420) Data frame received for 3\nI0202 23:07:19.616952 1104 log.go:181] (0xc0005ac780) (3) Data frame handling\nI0202 23:07:19.616971 1104 log.go:181] (0xc0005ac780) (3) Data frame sent\nI0202 23:07:19.616993 1104 log.go:181] (0xc00003a420) Data frame received for 5\nI0202 23:07:19.617022 1104 log.go:181] (0xc0005561e0) (5) Data frame handling\nI0202 23:07:19.617051 1104 log.go:181] (0xc0005561e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.229.84:80/\nI0202 23:07:19.623587 1104 log.go:181] (0xc00003a420) Data frame received for 3\nI0202 23:07:19.623623 1104 log.go:181] (0xc0005ac780) (3) Data frame handling\nI0202 23:07:19.623644 1104 log.go:181] (0xc0005ac780) (3) Data frame sent\nI0202 23:07:19.624379 1104 log.go:181] (0xc00003a420) Data frame received for 3\nI0202 23:07:19.624407 1104 log.go:181] (0xc0005ac780) (3) Data frame handling\nI0202 23:07:19.624434 1104 log.go:181] (0xc0005ac780) (3) Data frame sent\nI0202 23:07:19.624464 1104 log.go:181] (0xc00003a420) Data frame received for 5\nI0202 23:07:19.624497 1104 log.go:181] (0xc0005561e0) (5) Data frame handling\nI0202 23:07:19.624541 1104 log.go:181] (0xc0005561e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.229.84:80/\nI0202 23:07:19.630922 1104 log.go:181] (0xc00003a420) Data frame received for 3\nI0202 23:07:19.630943 1104 log.go:181] (0xc0005ac780) (3) Data frame handling\nI0202 23:07:19.630959 1104 log.go:181] (0xc0005ac780) (3) Data frame sent\nI0202 23:07:19.632290 1104 log.go:181] (0xc00003a420) Data frame received for 5\nI0202 23:07:19.632364 1104 log.go:181] (0xc0005561e0) (5) Data frame handling\nI0202 23:07:19.632421 1104 log.go:181] (0xc0005561e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.229.84:80/\nI0202 23:07:19.633029 1104 log.go:181] (0xc00003a420) Data frame received for 3\nI0202 23:07:19.633046 1104 log.go:181] (0xc0005ac780) (3) Data frame handling\nI0202 23:07:19.633059 1104 log.go:181] (0xc0005ac780) (3) Data frame sent\nI0202 23:07:19.636670 1104 log.go:181] (0xc00003a420) Data frame received for 3\nI0202 23:07:19.636685 1104 log.go:181] (0xc0005ac780) (3) Data frame handling\nI0202 23:07:19.636698 1104 log.go:181] (0xc0005ac780) (3) Data frame sent\nI0202 23:07:19.637245 1104 
log.go:181] (0xc00003a420) Data frame received for 5\nI0202 23:07:19.637262 1104 log.go:181] (0xc0005561e0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.229.84:80/\nI0202 23:07:19.637276 1104 log.go:181] (0xc00003a420) Data frame received for 3\nI0202 23:07:19.637318 1104 log.go:181] (0xc0005ac780) (3) Data frame handling\nI0202 23:07:19.637340 1104 log.go:181] (0xc0005ac780) (3) Data frame sent\nI0202 23:07:19.637354 1104 log.go:181] (0xc0005561e0) (5) Data frame sent\nI0202 23:07:19.641050 1104 log.go:181] (0xc00003a420) Data frame received for 3\nI0202 23:07:19.641065 1104 log.go:181] (0xc0005ac780) (3) Data frame handling\nI0202 23:07:19.641078 1104 log.go:181] (0xc0005ac780) (3) Data frame sent\nI0202 23:07:19.641500 1104 log.go:181] (0xc00003a420) Data frame received for 3\nI0202 23:07:19.641522 1104 log.go:181] (0xc00003a420) Data frame received for 5\nI0202 23:07:19.641545 1104 log.go:181] (0xc0005561e0) (5) Data frame handling\nI0202 23:07:19.641556 1104 log.go:181] (0xc0005561e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.229.84:80/\nI0202 23:07:19.641567 1104 log.go:181] (0xc0005ac780) (3) Data frame handling\nI0202 23:07:19.641577 1104 log.go:181] (0xc0005ac780) (3) Data frame sent\nI0202 23:07:19.645322 1104 log.go:181] (0xc00003a420) Data frame received for 3\nI0202 23:07:19.645345 1104 log.go:181] (0xc0005ac780) (3) Data frame handling\nI0202 23:07:19.645380 1104 log.go:181] (0xc0005ac780) (3) Data frame sent\nI0202 23:07:19.645732 1104 log.go:181] (0xc00003a420) Data frame received for 5\nI0202 23:07:19.645754 1104 log.go:181] (0xc0005561e0) (5) Data frame handling\nI0202 23:07:19.645778 1104 log.go:181] (0xc0005561e0) (5) Data frame sent\nI0202 23:07:19.645794 1104 log.go:181] (0xc00003a420) Data frame received for 3\nI0202 23:07:19.645806 1104 log.go:181] (0xc0005ac780) (3) Data frame handling\nI0202 23:07:19.645825 1104 log.go:181] (0xc0005ac780) (3) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.229.84:80/\nI0202 23:07:19.649199 1104 log.go:181] (0xc00003a420) Data frame received for 3\nI0202 23:07:19.649217 1104 log.go:181] (0xc0005ac780) (3) Data frame handling\nI0202 23:07:19.649235 1104 log.go:181] (0xc0005ac780) (3) Data frame sent\nI0202 23:07:19.649489 1104 log.go:181] (0xc00003a420) Data frame received for 3\nI0202 23:07:19.649510 1104 log.go:181] (0xc0005ac780) (3) Data frame handling\nI0202 23:07:19.649526 1104 log.go:181] (0xc0005ac780) (3) Data frame sent\nI0202 23:07:19.649543 1104 log.go:181] (0xc00003a420) Data frame received for 5\nI0202 23:07:19.649550 1104 log.go:181] (0xc0005561e0) (5) Data frame handling\nI0202 23:07:19.649570 1104 log.go:181] (0xc0005561e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.229.84:80/\nI0202 23:07:19.654285 1104 log.go:181] (0xc00003a420) Data frame received for 3\nI0202 23:07:19.654306 1104 log.go:181] (0xc0005ac780) (3) Data frame handling\nI0202 23:07:19.654319 1104 log.go:181] (0xc0005ac780) (3) Data frame sent\nI0202 23:07:19.654775 1104 log.go:181] (0xc00003a420) Data frame received for 3\nI0202 23:07:19.654815 1104 log.go:181] (0xc0005ac780) (3) Data frame handling\nI0202 23:07:19.654842 1104 log.go:181] (0xc0005ac780) (3) Data frame sent\nI0202 23:07:19.654874 1104 log.go:181] (0xc00003a420) Data frame received for 5\nI0202 23:07:19.654889 1104 log.go:181] (0xc0005561e0) (5) Data frame handling\nI0202 23:07:19.654898 1104 log.go:181] (0xc0005561e0) (5) Data frame sent\n+ echo\n+ curl -q -s 
--connect-timeout 2 http://10.96.229.84:80/\nI0202 23:07:19.658597 1104 log.go:181] (0xc00003a420) Data frame received for 3\nI0202 23:07:19.658622 1104 log.go:181] (0xc0005ac780) (3) Data frame handling\nI0202 23:07:19.658643 1104 log.go:181] (0xc0005ac780) (3) Data frame sent\nI0202 23:07:19.659602 1104 log.go:181] (0xc00003a420) Data frame received for 3\nI0202 23:07:19.659627 1104 log.go:181] (0xc0005ac780) (3) Data frame handling\nI0202 23:07:19.659639 1104 log.go:181] (0xc0005ac780) (3) Data frame sent\nI0202 23:07:19.659663 1104 log.go:181] (0xc00003a420) Data frame received for 5\nI0202 23:07:19.659672 1104 log.go:181] (0xc0005561e0) (5) Data frame handling\nI0202 23:07:19.659680 1104 log.go:181] (0xc0005561e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.229.84:80/\nI0202 23:07:19.663386 1104 log.go:181] (0xc00003a420) Data frame received for 3\nI0202 23:07:19.663399 1104 log.go:181] (0xc0005ac780) (3) Data frame handling\nI0202 23:07:19.663408 1104 log.go:181] (0xc0005ac780) (3) Data frame sent\nI0202 23:07:19.663888 1104 log.go:181] (0xc00003a420) Data frame received for 3\nI0202 23:07:19.663899 1104 log.go:181] (0xc0005ac780) (3) Data frame handling\nI0202 23:07:19.663905 1104 log.go:181] (0xc0005ac780) (3) Data frame sent\nI0202 23:07:19.663927 1104 log.go:181] (0xc00003a420) Data frame received for 5\nI0202 23:07:19.663943 1104 log.go:181] (0xc0005561e0) (5) Data frame handling\nI0202 23:07:19.663953 1104 log.go:181] (0xc0005561e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.229.84:80/\nI0202 23:07:19.670065 1104 log.go:181] (0xc00003a420) Data frame received for 3\nI0202 23:07:19.670076 1104 log.go:181] (0xc0005ac780) (3) Data frame handling\nI0202 23:07:19.670082 1104 log.go:181] (0xc0005ac780) (3) Data frame sent\nI0202 23:07:19.670670 1104 log.go:181] (0xc00003a420) Data frame received for 3\nI0202 23:07:19.670679 1104 log.go:181] (0xc0005ac780) (3) Data frame handling\nI0202 23:07:19.670685 1104 log.go:181] (0xc0005ac780) (3) Data frame sent\nI0202 23:07:19.670715 1104 log.go:181] (0xc00003a420) Data frame received for 5\nI0202 23:07:19.670744 1104 log.go:181] (0xc0005561e0) (5) Data frame handling\nI0202 23:07:19.670764 1104 log.go:181] (0xc0005561e0) (5) Data frame sent\nI0202 23:07:19.670777 1104 log.go:181] (0xc00003a420) Data frame received for 5\nI0202 23:07:19.670787 1104 log.go:181] (0xc0005561e0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.229.84:80/\nI0202 23:07:19.670816 1104 log.go:181] (0xc0005561e0) (5) Data frame sent\nI0202 23:07:19.674682 1104 log.go:181] (0xc00003a420) Data frame received for 3\nI0202 23:07:19.674705 1104 log.go:181] (0xc0005ac780) (3) Data frame handling\nI0202 23:07:19.674716 1104 log.go:181] (0xc0005ac780) (3) Data frame sent\nI0202 23:07:19.675402 1104 log.go:181] (0xc00003a420) Data frame received for 5\nI0202 23:07:19.675431 1104 log.go:181] (0xc00003a420) Data frame received for 3\nI0202 23:07:19.675463 1104 log.go:181] (0xc0005ac780) (3) Data frame handling\nI0202 23:07:19.675485 1104 log.go:181] (0xc0005ac780) (3) Data frame sent\nI0202 23:07:19.675509 1104 log.go:181] (0xc0005561e0) (5) Data frame handling\nI0202 23:07:19.675524 1104 log.go:181] (0xc0005561e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.229.84:80/\nI0202 23:07:19.681524 1104 log.go:181] (0xc00003a420) Data frame received for 3\nI0202 23:07:19.681563 1104 log.go:181] (0xc0005ac780) (3) Data frame handling\nI0202 23:07:19.681585 1104 
log.go:181] (0xc0005ac780) (3) Data frame sent\nI0202 23:07:19.682235 1104 log.go:181] (0xc00003a420) Data frame received for 3\nI0202 23:07:19.682263 1104 log.go:181] (0xc0005ac780) (3) Data frame handling\nI0202 23:07:19.682278 1104 log.go:181] (0xc0005ac780) (3) Data frame sent\nI0202 23:07:19.682304 1104 log.go:181] (0xc00003a420) Data frame received for 5\nI0202 23:07:19.682333 1104 log.go:181] (0xc0005561e0) (5) Data frame handling\nI0202 23:07:19.682348 1104 log.go:181] (0xc0005561e0) (5) Data frame sent\nI0202 23:07:19.682361 1104 log.go:181] (0xc00003a420) Data frame received for 5\nI0202 23:07:19.682373 1104 log.go:181] (0xc0005561e0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.229.84:80/\nI0202 23:07:19.682400 1104 log.go:181] (0xc0005561e0) (5) Data frame sent\nI0202 23:07:19.686591 1104 log.go:181] (0xc00003a420) Data frame received for 3\nI0202 23:07:19.686618 1104 log.go:181] (0xc0005ac780) (3) Data frame handling\nI0202 23:07:19.686640 1104 log.go:181] (0xc0005ac780) (3) Data frame sent\nI0202 23:07:19.687274 1104 log.go:181] (0xc00003a420) Data frame received for 3\nI0202 23:07:19.687320 1104 log.go:181] (0xc0005ac780) (3) Data frame handling\nI0202 23:07:19.687350 1104 log.go:181] (0xc00003a420) Data frame received for 5\nI0202 23:07:19.687371 1104 log.go:181] (0xc0005561e0) (5) Data frame handling\nI0202 23:07:19.689531 1104 log.go:181] (0xc00003a420) Data frame received for 1\nI0202 23:07:19.689547 1104 log.go:181] (0xc0005ac0a0) (1) Data frame handling\nI0202 23:07:19.689553 1104 log.go:181] (0xc0005ac0a0) (1) Data frame sent\nI0202 23:07:19.689560 1104 log.go:181] (0xc00003a420) (0xc0005ac0a0) Stream removed, broadcasting: 1\nI0202 23:07:19.689568 1104 log.go:181] (0xc00003a420) Go away received\nI0202 23:07:19.690076 1104 log.go:181] (0xc00003a420) (0xc0005ac0a0) Stream removed, broadcasting: 1\nI0202 23:07:19.690093 1104 log.go:181] (0xc00003a420) (0xc0005ac780) Stream removed, broadcasting: 3\nI0202 23:07:19.690101 1104 log.go:181] (0xc00003a420) (0xc0005561e0) Stream removed, broadcasting: 5\n" Feb 2 23:07:19.694: INFO: stdout: "\naffinity-clusterip-ms4sl\naffinity-clusterip-ms4sl\naffinity-clusterip-ms4sl\naffinity-clusterip-ms4sl\naffinity-clusterip-ms4sl\naffinity-clusterip-ms4sl\naffinity-clusterip-ms4sl\naffinity-clusterip-ms4sl\naffinity-clusterip-ms4sl\naffinity-clusterip-ms4sl\naffinity-clusterip-ms4sl\naffinity-clusterip-ms4sl\naffinity-clusterip-ms4sl\naffinity-clusterip-ms4sl\naffinity-clusterip-ms4sl\naffinity-clusterip-ms4sl" Feb 2 23:07:19.694: INFO: Received response from host: affinity-clusterip-ms4sl Feb 2 23:07:19.694: INFO: Received response from host: affinity-clusterip-ms4sl Feb 2 23:07:19.694: INFO: Received response from host: affinity-clusterip-ms4sl Feb 2 23:07:19.694: INFO: Received response from host: affinity-clusterip-ms4sl Feb 2 23:07:19.694: INFO: Received response from host: affinity-clusterip-ms4sl Feb 2 23:07:19.694: INFO: Received response from host: affinity-clusterip-ms4sl Feb 2 23:07:19.694: INFO: Received response from host: affinity-clusterip-ms4sl Feb 2 23:07:19.694: INFO: Received response from host: affinity-clusterip-ms4sl Feb 2 23:07:19.694: INFO: Received response from host: affinity-clusterip-ms4sl Feb 2 23:07:19.694: INFO: Received response from host: affinity-clusterip-ms4sl Feb 2 23:07:19.694: INFO: Received response from host: affinity-clusterip-ms4sl Feb 2 23:07:19.694: INFO: Received response from host: affinity-clusterip-ms4sl Feb 2 23:07:19.694: INFO: Received response from 
host: affinity-clusterip-ms4sl Feb 2 23:07:19.694: INFO: Received response from host: affinity-clusterip-ms4sl Feb 2 23:07:19.694: INFO: Received response from host: affinity-clusterip-ms4sl Feb 2 23:07:19.694: INFO: Received response from host: affinity-clusterip-ms4sl Feb 2 23:07:19.694: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-clusterip in namespace services-1102, will wait for the garbage collector to delete the pods Feb 2 23:07:19.832: INFO: Deleting ReplicationController affinity-clusterip took: 6.426278ms Feb 2 23:07:20.432: INFO: Terminating ReplicationController affinity-clusterip pods took: 600.21655ms [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 2 23:08:10.279: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-1102" for this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 • [SLOW TEST:62.845 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","total":309,"completed":105,"skipped":1714,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 2 23:08:10.303: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:745 [It] should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: creating service in namespace services-7368 STEP: creating service affinity-clusterip-transition in namespace services-7368 STEP: creating replication controller affinity-clusterip-transition in namespace services-7368 I0202 23:08:10.423124 7 runners.go:190] Created replication controller with name: affinity-clusterip-transition, namespace: services-7368, replica count: 3 I0202 23:08:13.473524 7 runners.go:190] affinity-clusterip-transition Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0202 23:08:16.473849 7 runners.go:190] affinity-clusterip-transition Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Feb 2 23:08:16.481: INFO: Creating new exec pod Feb 2 
23:08:21.516: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=services-7368 exec execpod-affinityjwx9d -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-transition 80' Feb 2 23:08:21.769: INFO: stderr: "I0202 23:08:21.671707 1122 log.go:181] (0xc00003a420) (0xc00030a1e0) Create stream\nI0202 23:08:21.671789 1122 log.go:181] (0xc00003a420) (0xc00030a1e0) Stream added, broadcasting: 1\nI0202 23:08:21.675606 1122 log.go:181] (0xc00003a420) Reply frame received for 1\nI0202 23:08:21.675635 1122 log.go:181] (0xc00003a420) (0xc00019dc20) Create stream\nI0202 23:08:21.675643 1122 log.go:181] (0xc00003a420) (0xc00019dc20) Stream added, broadcasting: 3\nI0202 23:08:21.676552 1122 log.go:181] (0xc00003a420) Reply frame received for 3\nI0202 23:08:21.676596 1122 log.go:181] (0xc00003a420) (0xc000bb03c0) Create stream\nI0202 23:08:21.676618 1122 log.go:181] (0xc00003a420) (0xc000bb03c0) Stream added, broadcasting: 5\nI0202 23:08:21.677545 1122 log.go:181] (0xc00003a420) Reply frame received for 5\nI0202 23:08:21.761235 1122 log.go:181] (0xc00003a420) Data frame received for 5\nI0202 23:08:21.761351 1122 log.go:181] (0xc000bb03c0) (5) Data frame handling\nI0202 23:08:21.761377 1122 log.go:181] (0xc000bb03c0) (5) Data frame sent\n+ nc -zv -t -w 2 affinity-clusterip-transition 80\nI0202 23:08:21.761825 1122 log.go:181] (0xc00003a420) Data frame received for 5\nI0202 23:08:21.761837 1122 log.go:181] (0xc000bb03c0) (5) Data frame handling\nI0202 23:08:21.761843 1122 log.go:181] (0xc000bb03c0) (5) Data frame sent\nConnection to affinity-clusterip-transition 80 port [tcp/http] succeeded!\nI0202 23:08:21.761952 1122 log.go:181] (0xc00003a420) Data frame received for 3\nI0202 23:08:21.761990 1122 log.go:181] (0xc00019dc20) (3) Data frame handling\nI0202 23:08:21.762014 1122 log.go:181] (0xc00003a420) Data frame received for 5\nI0202 23:08:21.762027 1122 log.go:181] (0xc000bb03c0) (5) Data frame handling\nI0202 23:08:21.763530 1122 log.go:181] (0xc00003a420) Data frame received for 1\nI0202 23:08:21.763552 1122 log.go:181] (0xc00030a1e0) (1) Data frame handling\nI0202 23:08:21.763569 1122 log.go:181] (0xc00030a1e0) (1) Data frame sent\nI0202 23:08:21.763609 1122 log.go:181] (0xc00003a420) (0xc00030a1e0) Stream removed, broadcasting: 1\nI0202 23:08:21.763825 1122 log.go:181] (0xc00003a420) Go away received\nI0202 23:08:21.763928 1122 log.go:181] (0xc00003a420) (0xc00030a1e0) Stream removed, broadcasting: 1\nI0202 23:08:21.763941 1122 log.go:181] (0xc00003a420) (0xc00019dc20) Stream removed, broadcasting: 3\nI0202 23:08:21.763948 1122 log.go:181] (0xc00003a420) (0xc000bb03c0) Stream removed, broadcasting: 5\n" Feb 2 23:08:21.769: INFO: stdout: "" Feb 2 23:08:21.770: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=services-7368 exec execpod-affinityjwx9d -- /bin/sh -x -c nc -zv -t -w 2 10.96.199.54 80' Feb 2 23:08:21.970: INFO: stderr: "I0202 23:08:21.895999 1140 log.go:181] (0xc00003a420) (0xc000c6e000) Create stream\nI0202 23:08:21.896050 1140 log.go:181] (0xc00003a420) (0xc000c6e000) Stream added, broadcasting: 1\nI0202 23:08:21.898082 1140 log.go:181] (0xc00003a420) Reply frame received for 1\nI0202 23:08:21.898143 1140 log.go:181] (0xc00003a420) (0xc0008ae1e0) Create stream\nI0202 23:08:21.898170 1140 log.go:181] (0xc00003a420) (0xc0008ae1e0) Stream added, broadcasting: 3\nI0202 23:08:21.898978 1140 log.go:181] (0xc00003a420) Reply frame received for 3\nI0202 
23:08:21.899015 1140 log.go:181] (0xc00003a420) (0xc0008ae280) Create stream\nI0202 23:08:21.899023 1140 log.go:181] (0xc00003a420) (0xc0008ae280) Stream added, broadcasting: 5\nI0202 23:08:21.899792 1140 log.go:181] (0xc00003a420) Reply frame received for 5\nI0202 23:08:21.962781 1140 log.go:181] (0xc00003a420) Data frame received for 3\nI0202 23:08:21.962816 1140 log.go:181] (0xc0008ae1e0) (3) Data frame handling\nI0202 23:08:21.962921 1140 log.go:181] (0xc00003a420) Data frame received for 5\nI0202 23:08:21.962943 1140 log.go:181] (0xc0008ae280) (5) Data frame handling\nI0202 23:08:21.962961 1140 log.go:181] (0xc0008ae280) (5) Data frame sent\nI0202 23:08:21.962977 1140 log.go:181] (0xc00003a420) Data frame received for 5\nI0202 23:08:21.962991 1140 log.go:181] (0xc0008ae280) (5) Data frame handling\n+ nc -zv -t -w 2 10.96.199.54 80\nConnection to 10.96.199.54 80 port [tcp/http] succeeded!\nI0202 23:08:21.964966 1140 log.go:181] (0xc00003a420) Data frame received for 1\nI0202 23:08:21.965001 1140 log.go:181] (0xc000c6e000) (1) Data frame handling\nI0202 23:08:21.965032 1140 log.go:181] (0xc000c6e000) (1) Data frame sent\nI0202 23:08:21.965079 1140 log.go:181] (0xc00003a420) (0xc000c6e000) Stream removed, broadcasting: 1\nI0202 23:08:21.965293 1140 log.go:181] (0xc00003a420) Go away received\nI0202 23:08:21.965646 1140 log.go:181] (0xc00003a420) (0xc000c6e000) Stream removed, broadcasting: 1\nI0202 23:08:21.965671 1140 log.go:181] (0xc00003a420) (0xc0008ae1e0) Stream removed, broadcasting: 3\nI0202 23:08:21.965684 1140 log.go:181] (0xc00003a420) (0xc0008ae280) Stream removed, broadcasting: 5\n" Feb 2 23:08:21.971: INFO: stdout: "" Feb 2 23:08:21.981: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=services-7368 exec execpod-affinityjwx9d -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.96.199.54:80/ ; done' Feb 2 23:08:22.286: INFO: stderr: "I0202 23:08:22.107891 1159 log.go:181] (0xc00018cdc0) (0xc000a3e3c0) Create stream\nI0202 23:08:22.107960 1159 log.go:181] (0xc00018cdc0) (0xc000a3e3c0) Stream added, broadcasting: 1\nI0202 23:08:22.110714 1159 log.go:181] (0xc00018cdc0) Reply frame received for 1\nI0202 23:08:22.110774 1159 log.go:181] (0xc00018cdc0) (0xc00082e0a0) Create stream\nI0202 23:08:22.110805 1159 log.go:181] (0xc00018cdc0) (0xc00082e0a0) Stream added, broadcasting: 3\nI0202 23:08:22.111760 1159 log.go:181] (0xc00018cdc0) Reply frame received for 3\nI0202 23:08:22.111787 1159 log.go:181] (0xc00018cdc0) (0xc00082e8c0) Create stream\nI0202 23:08:22.111795 1159 log.go:181] (0xc00018cdc0) (0xc00082e8c0) Stream added, broadcasting: 5\nI0202 23:08:22.112612 1159 log.go:181] (0xc00018cdc0) Reply frame received for 5\nI0202 23:08:22.187402 1159 log.go:181] (0xc00018cdc0) Data frame received for 5\nI0202 23:08:22.187442 1159 log.go:181] (0xc00082e8c0) (5) Data frame handling\nI0202 23:08:22.187456 1159 log.go:181] (0xc00082e8c0) (5) Data frame sent\nI0202 23:08:22.187465 1159 log.go:181] (0xc00018cdc0) Data frame received for 5\nI0202 23:08:22.187472 1159 log.go:181] (0xc00082e8c0) (5) Data frame handling\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.199.54:80/\nI0202 23:08:22.187484 1159 log.go:181] (0xc00018cdc0) Data frame received for 3\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.199.54:80/\nI0202 23:08:22.187491 1159 log.go:181] (0xc00082e0a0) (3) Data frame handling\nI0202 23:08:22.187498 1159 log.go:181] (0xc00082e0a0) (3) Data frame 
sent\nI0202 23:08:22.187506 1159 log.go:181] (0xc00018cdc0) Data frame received for 3\nI0202 23:08:22.187511 1159 log.go:181] (0xc00082e0a0) (3) Data frame handling\nI0202 23:08:22.187535 1159 log.go:181] (0xc00082e8c0) (5) Data frame sent\nI0202 23:08:22.187561 1159 log.go:181] (0xc00082e0a0) (3) Data frame sent\nI0202 23:08:22.189533 1159 log.go:181] (0xc00018cdc0) Data frame received for 3\nI0202 23:08:22.189545 1159 log.go:181] (0xc00082e0a0) (3) Data frame handling\nI0202 23:08:22.189555 1159 log.go:181] (0xc00082e0a0) (3) Data frame sent\nI0202 23:08:22.190062 1159 log.go:181] (0xc00018cdc0) Data frame received for 3\nI0202 23:08:22.190081 1159 log.go:181] (0xc00082e0a0) (3) Data frame handling\nI0202 23:08:22.190091 1159 log.go:181] (0xc00082e0a0) (3) Data frame sent\nI0202 23:08:22.190108 1159 log.go:181] (0xc00018cdc0) Data frame received for 5\nI0202 23:08:22.190123 1159 log.go:181] (0xc00082e8c0) (5) Data frame handling\nI0202 23:08:22.190133 1159 log.go:181] (0xc00082e8c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.199.54:80/\nI0202 23:08:22.193767 1159 log.go:181] (0xc00018cdc0) Data frame received for 3\nI0202 23:08:22.193781 1159 log.go:181] (0xc00082e0a0) (3) Data frame handling\nI0202 23:08:22.193792 1159 log.go:181] (0xc00082e0a0) (3) Data frame sent\nI0202 23:08:22.194191 1159 log.go:181] (0xc00018cdc0) Data frame received for 5\nI0202 23:08:22.194209 1159 log.go:181] (0xc00082e8c0) (5) Data frame handling\nI0202 23:08:22.194216 1159 log.go:181] (0xc00082e8c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.199.54:80/\nI0202 23:08:22.194227 1159 log.go:181] (0xc00018cdc0) Data frame received for 3\nI0202 23:08:22.194232 1159 log.go:181] (0xc00082e0a0) (3) Data frame handling\nI0202 23:08:22.194237 1159 log.go:181] (0xc00082e0a0) (3) Data frame sent\nI0202 23:08:22.198235 1159 log.go:181] (0xc00018cdc0) Data frame received for 3\nI0202 23:08:22.198264 1159 log.go:181] (0xc00082e0a0) (3) Data frame handling\nI0202 23:08:22.198288 1159 log.go:181] (0xc00082e0a0) (3) Data frame sent\nI0202 23:08:22.198623 1159 log.go:181] (0xc00018cdc0) Data frame received for 3\nI0202 23:08:22.198644 1159 log.go:181] (0xc00018cdc0) Data frame received for 5\nI0202 23:08:22.198664 1159 log.go:181] (0xc00082e8c0) (5) Data frame handling\nI0202 23:08:22.198672 1159 log.go:181] (0xc00082e8c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.199.54:80/\nI0202 23:08:22.198685 1159 log.go:181] (0xc00082e0a0) (3) Data frame handling\nI0202 23:08:22.198693 1159 log.go:181] (0xc00082e0a0) (3) Data frame sent\nI0202 23:08:22.203458 1159 log.go:181] (0xc00018cdc0) Data frame received for 3\nI0202 23:08:22.203471 1159 log.go:181] (0xc00082e0a0) (3) Data frame handling\nI0202 23:08:22.203483 1159 log.go:181] (0xc00082e0a0) (3) Data frame sent\nI0202 23:08:22.203979 1159 log.go:181] (0xc00018cdc0) Data frame received for 3\nI0202 23:08:22.203994 1159 log.go:181] (0xc00082e0a0) (3) Data frame handling\nI0202 23:08:22.204004 1159 log.go:181] (0xc00082e0a0) (3) Data frame sent\nI0202 23:08:22.204013 1159 log.go:181] (0xc00018cdc0) Data frame received for 5\nI0202 23:08:22.204025 1159 log.go:181] (0xc00082e8c0) (5) Data frame handling\nI0202 23:08:22.204050 1159 log.go:181] (0xc00082e8c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.199.54:80/\nI0202 23:08:22.208620 1159 log.go:181] (0xc00018cdc0) Data frame received for 3\nI0202 23:08:22.208637 1159 log.go:181] (0xc00082e0a0) (3) Data frame 
handling\nI0202 23:08:22.208655 1159 log.go:181] (0xc00082e0a0) (3) Data frame sent\nI0202 23:08:22.209335 1159 log.go:181] (0xc00018cdc0) Data frame received for 5\nI0202 23:08:22.209369 1159 log.go:181] (0xc00082e8c0) (5) Data frame handling\nI0202 23:08:22.209381 1159 log.go:181] (0xc00082e8c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.199.54:80/\nI0202 23:08:22.209396 1159 log.go:181] (0xc00018cdc0) Data frame received for 3\nI0202 23:08:22.209404 1159 log.go:181] (0xc00082e0a0) (3) Data frame handling\nI0202 23:08:22.209412 1159 log.go:181] (0xc00082e0a0) (3) Data frame sent\nI0202 23:08:22.213845 1159 log.go:181] (0xc00018cdc0) Data frame received for 3\nI0202 23:08:22.213862 1159 log.go:181] (0xc00082e0a0) (3) Data frame handling\nI0202 23:08:22.213880 1159 log.go:181] (0xc00082e0a0) (3) Data frame sent\nI0202 23:08:22.214783 1159 log.go:181] (0xc00018cdc0) Data frame received for 3\nI0202 23:08:22.214807 1159 log.go:181] (0xc00082e0a0) (3) Data frame handling\nI0202 23:08:22.214817 1159 log.go:181] (0xc00082e0a0) (3) Data frame sent\nI0202 23:08:22.214838 1159 log.go:181] (0xc00018cdc0) Data frame received for 5\nI0202 23:08:22.214853 1159 log.go:181] (0xc00082e8c0) (5) Data frame handling\nI0202 23:08:22.214869 1159 log.go:181] (0xc00082e8c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.199.54:80/\nI0202 23:08:22.219746 1159 log.go:181] (0xc00018cdc0) Data frame received for 3\nI0202 23:08:22.219768 1159 log.go:181] (0xc00082e0a0) (3) Data frame handling\nI0202 23:08:22.219787 1159 log.go:181] (0xc00082e0a0) (3) Data frame sent\nI0202 23:08:22.220150 1159 log.go:181] (0xc00018cdc0) Data frame received for 5\nI0202 23:08:22.220181 1159 log.go:181] (0xc00082e8c0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.199.54:80/\nI0202 23:08:22.220202 1159 log.go:181] (0xc00018cdc0) Data frame received for 3\nI0202 23:08:22.220233 1159 log.go:181] (0xc00082e0a0) (3) Data frame handling\nI0202 23:08:22.220246 1159 log.go:181] (0xc00082e0a0) (3) Data frame sent\nI0202 23:08:22.220264 1159 log.go:181] (0xc00082e8c0) (5) Data frame sent\nI0202 23:08:22.225196 1159 log.go:181] (0xc00018cdc0) Data frame received for 3\nI0202 23:08:22.225208 1159 log.go:181] (0xc00082e0a0) (3) Data frame handling\nI0202 23:08:22.225215 1159 log.go:181] (0xc00082e0a0) (3) Data frame sent\nI0202 23:08:22.226188 1159 log.go:181] (0xc00018cdc0) Data frame received for 3\nI0202 23:08:22.226214 1159 log.go:181] (0xc00082e0a0) (3) Data frame handling\nI0202 23:08:22.226226 1159 log.go:181] (0xc00082e0a0) (3) Data frame sent\nI0202 23:08:22.226243 1159 log.go:181] (0xc00018cdc0) Data frame received for 5\nI0202 23:08:22.226253 1159 log.go:181] (0xc00082e8c0) (5) Data frame handling\nI0202 23:08:22.226269 1159 log.go:181] (0xc00082e8c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.199.54:80/\nI0202 23:08:22.231633 1159 log.go:181] (0xc00018cdc0) Data frame received for 3\nI0202 23:08:22.231675 1159 log.go:181] (0xc00082e0a0) (3) Data frame handling\nI0202 23:08:22.231721 1159 log.go:181] (0xc00082e0a0) (3) Data frame sent\nI0202 23:08:22.232006 1159 log.go:181] (0xc00018cdc0) Data frame received for 3\nI0202 23:08:22.232049 1159 log.go:181] (0xc00082e0a0) (3) Data frame handling\nI0202 23:08:22.232077 1159 log.go:181] (0xc00082e0a0) (3) Data frame sent\nI0202 23:08:22.232103 1159 log.go:181] (0xc00018cdc0) Data frame received for 5\nI0202 23:08:22.232115 1159 log.go:181] (0xc00082e8c0) (5) Data frame 
handling\nI0202 23:08:22.232137 1159 log.go:181] (0xc00082e8c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.199.54:80/\nI0202 23:08:22.237697 1159 log.go:181] (0xc00018cdc0) Data frame received for 3\nI0202 23:08:22.237718 1159 log.go:181] (0xc00082e0a0) (3) Data frame handling\nI0202 23:08:22.237736 1159 log.go:181] (0xc00082e0a0) (3) Data frame sent\nI0202 23:08:22.238333 1159 log.go:181] (0xc00018cdc0) Data frame received for 5\nI0202 23:08:22.238364 1159 log.go:181] (0xc00018cdc0) Data frame received for 3\nI0202 23:08:22.238392 1159 log.go:181] (0xc00082e0a0) (3) Data frame handling\nI0202 23:08:22.238421 1159 log.go:181] (0xc00082e0a0) (3) Data frame sent\nI0202 23:08:22.238459 1159 log.go:181] (0xc00082e8c0) (5) Data frame handling\nI0202 23:08:22.238485 1159 log.go:181] (0xc00082e8c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.199.54:80/\nI0202 23:08:22.243060 1159 log.go:181] (0xc00018cdc0) Data frame received for 3\nI0202 23:08:22.243081 1159 log.go:181] (0xc00082e0a0) (3) Data frame handling\nI0202 23:08:22.243101 1159 log.go:181] (0xc00082e0a0) (3) Data frame sent\nI0202 23:08:22.244011 1159 log.go:181] (0xc00018cdc0) Data frame received for 3\nI0202 23:08:22.244034 1159 log.go:181] (0xc00082e0a0) (3) Data frame handling\nI0202 23:08:22.244045 1159 log.go:181] (0xc00082e0a0) (3) Data frame sent\nI0202 23:08:22.244071 1159 log.go:181] (0xc00018cdc0) Data frame received for 5\nI0202 23:08:22.244091 1159 log.go:181] (0xc00082e8c0) (5) Data frame handling\nI0202 23:08:22.244119 1159 log.go:181] (0xc00082e8c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.199.54:80/\nI0202 23:08:22.250175 1159 log.go:181] (0xc00018cdc0) Data frame received for 3\nI0202 23:08:22.250198 1159 log.go:181] (0xc00082e0a0) (3) Data frame handling\nI0202 23:08:22.250217 1159 log.go:181] (0xc00082e0a0) (3) Data frame sent\nI0202 23:08:22.251094 1159 log.go:181] (0xc00018cdc0) Data frame received for 3\nI0202 23:08:22.251132 1159 log.go:181] (0xc00082e0a0) (3) Data frame handling\nI0202 23:08:22.251151 1159 log.go:181] (0xc00082e0a0) (3) Data frame sent\nI0202 23:08:22.251172 1159 log.go:181] (0xc00018cdc0) Data frame received for 5\nI0202 23:08:22.251185 1159 log.go:181] (0xc00082e8c0) (5) Data frame handling\nI0202 23:08:22.251216 1159 log.go:181] (0xc00082e8c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.199.54:80/\nI0202 23:08:22.258564 1159 log.go:181] (0xc00018cdc0) Data frame received for 3\nI0202 23:08:22.258588 1159 log.go:181] (0xc00082e0a0) (3) Data frame handling\nI0202 23:08:22.258607 1159 log.go:181] (0xc00082e0a0) (3) Data frame sent\nI0202 23:08:22.259540 1159 log.go:181] (0xc00018cdc0) Data frame received for 3\nI0202 23:08:22.259571 1159 log.go:181] (0xc00082e0a0) (3) Data frame handling\nI0202 23:08:22.259591 1159 log.go:181] (0xc00082e0a0) (3) Data frame sent\nI0202 23:08:22.259619 1159 log.go:181] (0xc00018cdc0) Data frame received for 5\nI0202 23:08:22.259631 1159 log.go:181] (0xc00082e8c0) (5) Data frame handling\nI0202 23:08:22.259648 1159 log.go:181] (0xc00082e8c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.199.54:80/\nI0202 23:08:22.265997 1159 log.go:181] (0xc00018cdc0) Data frame received for 3\nI0202 23:08:22.266023 1159 log.go:181] (0xc00082e0a0) (3) Data frame handling\nI0202 23:08:22.266042 1159 log.go:181] (0xc00082e0a0) (3) Data frame sent\nI0202 23:08:22.266786 1159 log.go:181] (0xc00018cdc0) Data frame received for 
3\nI0202 23:08:22.266809 1159 log.go:181] (0xc00018cdc0) Data frame received for 5\nI0202 23:08:22.266824 1159 log.go:181] (0xc00082e8c0) (5) Data frame handling\nI0202 23:08:22.266834 1159 log.go:181] (0xc00082e8c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.199.54:80/\nI0202 23:08:22.266848 1159 log.go:181] (0xc00082e0a0) (3) Data frame handling\nI0202 23:08:22.266855 1159 log.go:181] (0xc00082e0a0) (3) Data frame sent\nI0202 23:08:22.274298 1159 log.go:181] (0xc00018cdc0) Data frame received for 3\nI0202 23:08:22.274331 1159 log.go:181] (0xc00082e0a0) (3) Data frame handling\nI0202 23:08:22.274367 1159 log.go:181] (0xc00082e0a0) (3) Data frame sent\nI0202 23:08:22.275367 1159 log.go:181] (0xc00018cdc0) Data frame received for 5\nI0202 23:08:22.275391 1159 log.go:181] (0xc00082e8c0) (5) Data frame handling\nI0202 23:08:22.275434 1159 log.go:181] (0xc00018cdc0) Data frame received for 3\nI0202 23:08:22.275479 1159 log.go:181] (0xc00082e0a0) (3) Data frame handling\nI0202 23:08:22.277530 1159 log.go:181] (0xc00018cdc0) Data frame received for 1\nI0202 23:08:22.277550 1159 log.go:181] (0xc000a3e3c0) (1) Data frame handling\nI0202 23:08:22.277561 1159 log.go:181] (0xc000a3e3c0) (1) Data frame sent\nI0202 23:08:22.277581 1159 log.go:181] (0xc00018cdc0) (0xc000a3e3c0) Stream removed, broadcasting: 1\nI0202 23:08:22.277598 1159 log.go:181] (0xc00018cdc0) Go away received\nI0202 23:08:22.278095 1159 log.go:181] (0xc00018cdc0) (0xc000a3e3c0) Stream removed, broadcasting: 1\nI0202 23:08:22.278124 1159 log.go:181] (0xc00018cdc0) (0xc00082e0a0) Stream removed, broadcasting: 3\nI0202 23:08:22.278138 1159 log.go:181] (0xc00018cdc0) (0xc00082e8c0) Stream removed, broadcasting: 5\n" Feb 2 23:08:22.286: INFO: stdout: "\naffinity-clusterip-transition-lhhk8\naffinity-clusterip-transition-bn88k\naffinity-clusterip-transition-lhhk8\naffinity-clusterip-transition-bn88k\naffinity-clusterip-transition-bn88k\naffinity-clusterip-transition-lhhk8\naffinity-clusterip-transition-lhhk8\naffinity-clusterip-transition-zj4bz\naffinity-clusterip-transition-zj4bz\naffinity-clusterip-transition-bn88k\naffinity-clusterip-transition-lhhk8\naffinity-clusterip-transition-zj4bz\naffinity-clusterip-transition-bn88k\naffinity-clusterip-transition-bn88k\naffinity-clusterip-transition-bn88k\naffinity-clusterip-transition-zj4bz" Feb 2 23:08:22.286: INFO: Received response from host: affinity-clusterip-transition-lhhk8 Feb 2 23:08:22.286: INFO: Received response from host: affinity-clusterip-transition-bn88k Feb 2 23:08:22.286: INFO: Received response from host: affinity-clusterip-transition-lhhk8 Feb 2 23:08:22.286: INFO: Received response from host: affinity-clusterip-transition-bn88k Feb 2 23:08:22.286: INFO: Received response from host: affinity-clusterip-transition-bn88k Feb 2 23:08:22.286: INFO: Received response from host: affinity-clusterip-transition-lhhk8 Feb 2 23:08:22.286: INFO: Received response from host: affinity-clusterip-transition-lhhk8 Feb 2 23:08:22.286: INFO: Received response from host: affinity-clusterip-transition-zj4bz Feb 2 23:08:22.286: INFO: Received response from host: affinity-clusterip-transition-zj4bz Feb 2 23:08:22.286: INFO: Received response from host: affinity-clusterip-transition-bn88k Feb 2 23:08:22.286: INFO: Received response from host: affinity-clusterip-transition-lhhk8 Feb 2 23:08:22.286: INFO: Received response from host: affinity-clusterip-transition-zj4bz Feb 2 23:08:22.286: INFO: Received response from host: affinity-clusterip-transition-bn88k Feb 2 
23:08:22.286: INFO: Received response from host: affinity-clusterip-transition-bn88k Feb 2 23:08:22.286: INFO: Received response from host: affinity-clusterip-transition-bn88k Feb 2 23:08:22.286: INFO: Received response from host: affinity-clusterip-transition-zj4bz Feb 2 23:08:22.296: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=services-7368 exec execpod-affinityjwx9d -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.96.199.54:80/ ; done' Feb 2 23:08:22.633: INFO: stderr: "I0202 23:08:22.436814 1176 log.go:181] (0xc0006398c0) (0xc000630a00) Create stream\nI0202 23:08:22.436977 1176 log.go:181] (0xc0006398c0) (0xc000630a00) Stream added, broadcasting: 1\nI0202 23:08:22.439198 1176 log.go:181] (0xc0006398c0) Reply frame received for 1\nI0202 23:08:22.439228 1176 log.go:181] (0xc0006398c0) (0xc000630aa0) Create stream\nI0202 23:08:22.439237 1176 log.go:181] (0xc0006398c0) (0xc000630aa0) Stream added, broadcasting: 3\nI0202 23:08:22.440178 1176 log.go:181] (0xc0006398c0) Reply frame received for 3\nI0202 23:08:22.440214 1176 log.go:181] (0xc0006398c0) (0xc000762280) Create stream\nI0202 23:08:22.440232 1176 log.go:181] (0xc0006398c0) (0xc000762280) Stream added, broadcasting: 5\nI0202 23:08:22.441114 1176 log.go:181] (0xc0006398c0) Reply frame received for 5\nI0202 23:08:22.510211 1176 log.go:181] (0xc0006398c0) Data frame received for 5\nI0202 23:08:22.510275 1176 log.go:181] (0xc000762280) (5) Data frame handling\nI0202 23:08:22.510307 1176 log.go:181] (0xc000762280) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.199.54:80/\nI0202 23:08:22.510357 1176 log.go:181] (0xc0006398c0) Data frame received for 3\nI0202 23:08:22.510375 1176 log.go:181] (0xc000630aa0) (3) Data frame handling\nI0202 23:08:22.510382 1176 log.go:181] (0xc000630aa0) (3) Data frame sent\nI0202 23:08:22.516695 1176 log.go:181] (0xc0006398c0) Data frame received for 3\nI0202 23:08:22.516719 1176 log.go:181] (0xc000630aa0) (3) Data frame handling\nI0202 23:08:22.516748 1176 log.go:181] (0xc000630aa0) (3) Data frame sent\nI0202 23:08:22.516841 1176 log.go:181] (0xc0006398c0) Data frame received for 5\nI0202 23:08:22.516862 1176 log.go:181] (0xc000762280) (5) Data frame handling\nI0202 23:08:22.516968 1176 log.go:181] (0xc000762280) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.199.54:80/\nI0202 23:08:22.517112 1176 log.go:181] (0xc0006398c0) Data frame received for 3\nI0202 23:08:22.517127 1176 log.go:181] (0xc000630aa0) (3) Data frame handling\nI0202 23:08:22.517134 1176 log.go:181] (0xc000630aa0) (3) Data frame sent\nI0202 23:08:22.522165 1176 log.go:181] (0xc0006398c0) Data frame received for 3\nI0202 23:08:22.522181 1176 log.go:181] (0xc000630aa0) (3) Data frame handling\nI0202 23:08:22.522193 1176 log.go:181] (0xc000630aa0) (3) Data frame sent\nI0202 23:08:22.522458 1176 log.go:181] (0xc0006398c0) Data frame received for 5\nI0202 23:08:22.522472 1176 log.go:181] (0xc000762280) (5) Data frame handling\nI0202 23:08:22.522480 1176 log.go:181] (0xc000762280) (5) Data frame sent\nI0202 23:08:22.522486 1176 log.go:181] (0xc0006398c0) Data frame received for 5\nI0202 23:08:22.522492 1176 log.go:181] (0xc000762280) (5) Data frame handling\nI0202 23:08:22.522501 1176 log.go:181] (0xc0006398c0) Data frame received for 3\nI0202 23:08:22.522508 1176 log.go:181] (0xc000630aa0) (3) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2I0202 
23:08:22.522517 1176 log.go:181] (0xc000630aa0) (3) Data frame sent\n http://10.96.199.54:80/\nI0202 23:08:22.522539 1176 log.go:181] (0xc000762280) (5) Data frame sent\nI0202 23:08:22.529918 1176 log.go:181] (0xc0006398c0) Data frame received for 3\nI0202 23:08:22.529939 1176 log.go:181] (0xc000630aa0) (3) Data frame handling\nI0202 23:08:22.529952 1176 log.go:181] (0xc000630aa0) (3) Data frame sent\nI0202 23:08:22.530765 1176 log.go:181] (0xc0006398c0) Data frame received for 5\nI0202 23:08:22.530784 1176 log.go:181] (0xc000762280) (5) Data frame handling\nI0202 23:08:22.530792 1176 log.go:181] (0xc000762280) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.199.54:80/\nI0202 23:08:22.530909 1176 log.go:181] (0xc0006398c0) Data frame received for 3\nI0202 23:08:22.530931 1176 log.go:181] (0xc000630aa0) (3) Data frame handling\nI0202 23:08:22.530958 1176 log.go:181] (0xc000630aa0) (3) Data frame sent\nI0202 23:08:22.537722 1176 log.go:181] (0xc0006398c0) Data frame received for 3\nI0202 23:08:22.537747 1176 log.go:181] (0xc000630aa0) (3) Data frame handling\nI0202 23:08:22.537789 1176 log.go:181] (0xc000630aa0) (3) Data frame sent\nI0202 23:08:22.538638 1176 log.go:181] (0xc0006398c0) Data frame received for 5\nI0202 23:08:22.538660 1176 log.go:181] (0xc000762280) (5) Data frame handling\nI0202 23:08:22.538678 1176 log.go:181] (0xc000762280) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.199.54:80/\nI0202 23:08:22.538697 1176 log.go:181] (0xc0006398c0) Data frame received for 3\nI0202 23:08:22.538716 1176 log.go:181] (0xc000630aa0) (3) Data frame handling\nI0202 23:08:22.538728 1176 log.go:181] (0xc000630aa0) (3) Data frame sent\nI0202 23:08:22.545068 1176 log.go:181] (0xc0006398c0) Data frame received for 3\nI0202 23:08:22.545089 1176 log.go:181] (0xc000630aa0) (3) Data frame handling\nI0202 23:08:22.545101 1176 log.go:181] (0xc000630aa0) (3) Data frame sent\nI0202 23:08:22.545650 1176 log.go:181] (0xc0006398c0) Data frame received for 3\nI0202 23:08:22.545669 1176 log.go:181] (0xc000630aa0) (3) Data frame handling\nI0202 23:08:22.545679 1176 log.go:181] (0xc000630aa0) (3) Data frame sent\nI0202 23:08:22.545694 1176 log.go:181] (0xc0006398c0) Data frame received for 5\nI0202 23:08:22.545702 1176 log.go:181] (0xc000762280) (5) Data frame handling\nI0202 23:08:22.545711 1176 log.go:181] (0xc000762280) (5) Data frame sent\nI0202 23:08:22.545719 1176 log.go:181] (0xc0006398c0) Data frame received for 5\nI0202 23:08:22.545726 1176 log.go:181] (0xc000762280) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.199.54:80/\nI0202 23:08:22.545744 1176 log.go:181] (0xc000762280) (5) Data frame sent\nI0202 23:08:22.552247 1176 log.go:181] (0xc0006398c0) Data frame received for 3\nI0202 23:08:22.552265 1176 log.go:181] (0xc000630aa0) (3) Data frame handling\nI0202 23:08:22.552286 1176 log.go:181] (0xc000630aa0) (3) Data frame sent\nI0202 23:08:22.553208 1176 log.go:181] (0xc0006398c0) Data frame received for 5\nI0202 23:08:22.553233 1176 log.go:181] (0xc000762280) (5) Data frame handling\nI0202 23:08:22.553247 1176 log.go:181] (0xc000762280) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.199.54:80/\nI0202 23:08:22.553272 1176 log.go:181] (0xc0006398c0) Data frame received for 3\nI0202 23:08:22.553304 1176 log.go:181] (0xc000630aa0) (3) Data frame handling\nI0202 23:08:22.553332 1176 log.go:181] (0xc000630aa0) (3) Data frame sent\nI0202 23:08:22.559433 1176 log.go:181] (0xc0006398c0) Data frame 
received for 3\nI0202 23:08:22.559453 1176 log.go:181] (0xc000630aa0) (3) Data frame handling\nI0202 23:08:22.559468 1176 log.go:181] (0xc000630aa0) (3) Data frame sent\nI0202 23:08:22.559903 1176 log.go:181] (0xc0006398c0) Data frame received for 3\nI0202 23:08:22.559937 1176 log.go:181] (0xc000630aa0) (3) Data frame handling\nI0202 23:08:22.559956 1176 log.go:181] (0xc000630aa0) (3) Data frame sent\nI0202 23:08:22.559978 1176 log.go:181] (0xc0006398c0) Data frame received for 5\nI0202 23:08:22.559992 1176 log.go:181] (0xc000762280) (5) Data frame handling\nI0202 23:08:22.560013 1176 log.go:181] (0xc000762280) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.199.54:80/\nI0202 23:08:22.565456 1176 log.go:181] (0xc0006398c0) Data frame received for 3\nI0202 23:08:22.565480 1176 log.go:181] (0xc000630aa0) (3) Data frame handling\nI0202 23:08:22.565501 1176 log.go:181] (0xc000630aa0) (3) Data frame sent\nI0202 23:08:22.566362 1176 log.go:181] (0xc0006398c0) Data frame received for 3\nI0202 23:08:22.566387 1176 log.go:181] (0xc000630aa0) (3) Data frame handling\nI0202 23:08:22.566410 1176 log.go:181] (0xc000630aa0) (3) Data frame sent\nI0202 23:08:22.566456 1176 log.go:181] (0xc0006398c0) Data frame received for 5\nI0202 23:08:22.566475 1176 log.go:181] (0xc000762280) (5) Data frame handling\nI0202 23:08:22.566494 1176 log.go:181] (0xc000762280) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.199.54:80/\nI0202 23:08:22.570016 1176 log.go:181] (0xc0006398c0) Data frame received for 3\nI0202 23:08:22.570042 1176 log.go:181] (0xc000630aa0) (3) Data frame handling\nI0202 23:08:22.570065 1176 log.go:181] (0xc000630aa0) (3) Data frame sent\nI0202 23:08:22.570242 1176 log.go:181] (0xc0006398c0) Data frame received for 3\nI0202 23:08:22.570253 1176 log.go:181] (0xc000630aa0) (3) Data frame handling\nI0202 23:08:22.570259 1176 log.go:181] (0xc000630aa0) (3) Data frame sent\nI0202 23:08:22.570290 1176 log.go:181] (0xc0006398c0) Data frame received for 5\nI0202 23:08:22.570318 1176 log.go:181] (0xc000762280) (5) Data frame handling\nI0202 23:08:22.570338 1176 log.go:181] (0xc000762280) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.199.54:80/\nI0202 23:08:22.577899 1176 log.go:181] (0xc0006398c0) Data frame received for 3\nI0202 23:08:22.577937 1176 log.go:181] (0xc000630aa0) (3) Data frame handling\nI0202 23:08:22.577976 1176 log.go:181] (0xc000630aa0) (3) Data frame sent\nI0202 23:08:22.578088 1176 log.go:181] (0xc0006398c0) Data frame received for 5\nI0202 23:08:22.578102 1176 log.go:181] (0xc000762280) (5) Data frame handling\nI0202 23:08:22.578121 1176 log.go:181] (0xc000762280) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.199.54:80/\nI0202 23:08:22.578292 1176 log.go:181] (0xc0006398c0) Data frame received for 3\nI0202 23:08:22.578312 1176 log.go:181] (0xc000630aa0) (3) Data frame handling\nI0202 23:08:22.578327 1176 log.go:181] (0xc000630aa0) (3) Data frame sent\nI0202 23:08:22.584519 1176 log.go:181] (0xc0006398c0) Data frame received for 3\nI0202 23:08:22.584560 1176 log.go:181] (0xc000630aa0) (3) Data frame handling\nI0202 23:08:22.584594 1176 log.go:181] (0xc000630aa0) (3) Data frame sent\nI0202 23:08:22.585254 1176 log.go:181] (0xc0006398c0) Data frame received for 5\nI0202 23:08:22.585279 1176 log.go:181] (0xc000762280) (5) Data frame handling\nI0202 23:08:22.585291 1176 log.go:181] (0xc000762280) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 
http://10.96.199.54:80/\nI0202 23:08:22.585306 1176 log.go:181] (0xc0006398c0) Data frame received for 3\nI0202 23:08:22.585313 1176 log.go:181] (0xc000630aa0) (3) Data frame handling\nI0202 23:08:22.585322 1176 log.go:181] (0xc000630aa0) (3) Data frame sent\nI0202 23:08:22.592818 1176 log.go:181] (0xc0006398c0) Data frame received for 3\nI0202 23:08:22.592950 1176 log.go:181] (0xc000630aa0) (3) Data frame handling\nI0202 23:08:22.592975 1176 log.go:181] (0xc000630aa0) (3) Data frame sent\nI0202 23:08:22.593664 1176 log.go:181] (0xc0006398c0) Data frame received for 3\nI0202 23:08:22.593674 1176 log.go:181] (0xc000630aa0) (3) Data frame handling\nI0202 23:08:22.593680 1176 log.go:181] (0xc000630aa0) (3) Data frame sent\nI0202 23:08:22.593689 1176 log.go:181] (0xc0006398c0) Data frame received for 5\nI0202 23:08:22.593696 1176 log.go:181] (0xc000762280) (5) Data frame handling\nI0202 23:08:22.593704 1176 log.go:181] (0xc000762280) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.199.54:80/\nI0202 23:08:22.599004 1176 log.go:181] (0xc0006398c0) Data frame received for 3\nI0202 23:08:22.599051 1176 log.go:181] (0xc000630aa0) (3) Data frame handling\nI0202 23:08:22.599128 1176 log.go:181] (0xc000630aa0) (3) Data frame sent\nI0202 23:08:22.599460 1176 log.go:181] (0xc0006398c0) Data frame received for 5\nI0202 23:08:22.599480 1176 log.go:181] (0xc000762280) (5) Data frame handling\nI0202 23:08:22.599494 1176 log.go:181] (0xc000762280) (5) Data frame sent\nI0202 23:08:22.599506 1176 log.go:181] (0xc0006398c0) Data frame received for 5\nI0202 23:08:22.599515 1176 log.go:181] (0xc000762280) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.199.54:80/\nI0202 23:08:22.599534 1176 log.go:181] (0xc000762280) (5) Data frame sent\nI0202 23:08:22.599743 1176 log.go:181] (0xc0006398c0) Data frame received for 3\nI0202 23:08:22.599754 1176 log.go:181] (0xc000630aa0) (3) Data frame handling\nI0202 23:08:22.599762 1176 log.go:181] (0xc000630aa0) (3) Data frame sent\nI0202 23:08:22.606964 1176 log.go:181] (0xc0006398c0) Data frame received for 3\nI0202 23:08:22.606977 1176 log.go:181] (0xc000630aa0) (3) Data frame handling\nI0202 23:08:22.606985 1176 log.go:181] (0xc000630aa0) (3) Data frame sent\nI0202 23:08:22.607733 1176 log.go:181] (0xc0006398c0) Data frame received for 3\nI0202 23:08:22.607778 1176 log.go:181] (0xc000630aa0) (3) Data frame handling\nI0202 23:08:22.607791 1176 log.go:181] (0xc000630aa0) (3) Data frame sent\nI0202 23:08:22.607806 1176 log.go:181] (0xc0006398c0) Data frame received for 5\nI0202 23:08:22.607815 1176 log.go:181] (0xc000762280) (5) Data frame handling\nI0202 23:08:22.607829 1176 log.go:181] (0xc000762280) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.199.54:80/\nI0202 23:08:22.615371 1176 log.go:181] (0xc0006398c0) Data frame received for 3\nI0202 23:08:22.615398 1176 log.go:181] (0xc000630aa0) (3) Data frame handling\nI0202 23:08:22.615425 1176 log.go:181] (0xc000630aa0) (3) Data frame sent\nI0202 23:08:22.615936 1176 log.go:181] (0xc0006398c0) Data frame received for 3\nI0202 23:08:22.615949 1176 log.go:181] (0xc000630aa0) (3) Data frame handling\nI0202 23:08:22.615957 1176 log.go:181] (0xc000630aa0) (3) Data frame sent\nI0202 23:08:22.615970 1176 log.go:181] (0xc0006398c0) Data frame received for 5\nI0202 23:08:22.615992 1176 log.go:181] (0xc000762280) (5) Data frame handling\nI0202 23:08:22.615998 1176 log.go:181] (0xc000762280) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 
http://10.96.199.54:80/\nI0202 23:08:22.623544 1176 log.go:181] (0xc0006398c0) Data frame received for 3\nI0202 23:08:22.623580 1176 log.go:181] (0xc000630aa0) (3) Data frame handling\nI0202 23:08:22.623610 1176 log.go:181] (0xc000630aa0) (3) Data frame sent\nI0202 23:08:22.624657 1176 log.go:181] (0xc0006398c0) Data frame received for 3\nI0202 23:08:22.624680 1176 log.go:181] (0xc000630aa0) (3) Data frame handling\nI0202 23:08:22.624712 1176 log.go:181] (0xc0006398c0) Data frame received for 5\nI0202 23:08:22.624750 1176 log.go:181] (0xc000762280) (5) Data frame handling\nI0202 23:08:22.627008 1176 log.go:181] (0xc0006398c0) Data frame received for 1\nI0202 23:08:22.627038 1176 log.go:181] (0xc000630a00) (1) Data frame handling\nI0202 23:08:22.627066 1176 log.go:181] (0xc000630a00) (1) Data frame sent\nI0202 23:08:22.627087 1176 log.go:181] (0xc0006398c0) (0xc000630a00) Stream removed, broadcasting: 1\nI0202 23:08:22.627187 1176 log.go:181] (0xc0006398c0) Go away received\nI0202 23:08:22.627553 1176 log.go:181] (0xc0006398c0) (0xc000630a00) Stream removed, broadcasting: 1\nI0202 23:08:22.627572 1176 log.go:181] (0xc0006398c0) (0xc000630aa0) Stream removed, broadcasting: 3\nI0202 23:08:22.627582 1176 log.go:181] (0xc0006398c0) (0xc000762280) Stream removed, broadcasting: 5\n" Feb 2 23:08:22.634: INFO: stdout: "\naffinity-clusterip-transition-lhhk8\naffinity-clusterip-transition-lhhk8\naffinity-clusterip-transition-lhhk8\naffinity-clusterip-transition-lhhk8\naffinity-clusterip-transition-lhhk8\naffinity-clusterip-transition-lhhk8\naffinity-clusterip-transition-lhhk8\naffinity-clusterip-transition-lhhk8\naffinity-clusterip-transition-lhhk8\naffinity-clusterip-transition-lhhk8\naffinity-clusterip-transition-lhhk8\naffinity-clusterip-transition-lhhk8\naffinity-clusterip-transition-lhhk8\naffinity-clusterip-transition-lhhk8\naffinity-clusterip-transition-lhhk8\naffinity-clusterip-transition-lhhk8" Feb 2 23:08:22.634: INFO: Received response from host: affinity-clusterip-transition-lhhk8 Feb 2 23:08:22.634: INFO: Received response from host: affinity-clusterip-transition-lhhk8 Feb 2 23:08:22.634: INFO: Received response from host: affinity-clusterip-transition-lhhk8 Feb 2 23:08:22.634: INFO: Received response from host: affinity-clusterip-transition-lhhk8 Feb 2 23:08:22.634: INFO: Received response from host: affinity-clusterip-transition-lhhk8 Feb 2 23:08:22.634: INFO: Received response from host: affinity-clusterip-transition-lhhk8 Feb 2 23:08:22.634: INFO: Received response from host: affinity-clusterip-transition-lhhk8 Feb 2 23:08:22.634: INFO: Received response from host: affinity-clusterip-transition-lhhk8 Feb 2 23:08:22.634: INFO: Received response from host: affinity-clusterip-transition-lhhk8 Feb 2 23:08:22.634: INFO: Received response from host: affinity-clusterip-transition-lhhk8 Feb 2 23:08:22.634: INFO: Received response from host: affinity-clusterip-transition-lhhk8 Feb 2 23:08:22.634: INFO: Received response from host: affinity-clusterip-transition-lhhk8 Feb 2 23:08:22.634: INFO: Received response from host: affinity-clusterip-transition-lhhk8 Feb 2 23:08:22.634: INFO: Received response from host: affinity-clusterip-transition-lhhk8 Feb 2 23:08:22.634: INFO: Received response from host: affinity-clusterip-transition-lhhk8 Feb 2 23:08:22.634: INFO: Received response from host: affinity-clusterip-transition-lhhk8 Feb 2 23:08:22.634: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-clusterip-transition in namespace services-7368, will wait for the garbage 
collector to delete the pods Feb 2 23:08:22.937: INFO: Deleting ReplicationController affinity-clusterip-transition took: 206.904822ms Feb 2 23:08:23.338: INFO: Terminating ReplicationController affinity-clusterip-transition pods took: 400.68737ms [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 2 23:09:10.376: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-7368" for this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 • [SLOW TEST:60.136 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","total":309,"completed":106,"skipped":1769,"failed":0} SSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 2 23:09:10.440: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-7970.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-2.dns-test-service-2.dns-7970.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/wheezy_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7970.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-7970.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-2.dns-test-service-2.dns-7970.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/jessie_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-7970.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Feb 2 23:09:18.766: INFO: DNS probes using dns-7970/dns-test-3fa12832-b679-4e84-82f4-bc531019ab9e succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 2 23:09:18.949: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-7970" for this suite. • [SLOW TEST:9.098 seconds] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":309,"completed":107,"skipped":1777,"failed":0} SSS ------------------------------ [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 2 23:09:19.538: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in env vars [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating secret with name secret-test-114c740b-ccc7-4136-8f31-8adb79774e40 STEP: Creating a pod to test consume secrets Feb 2 23:09:19.659: INFO: Waiting up to 5m0s for pod "pod-secrets-3f351b15-8bb5-4132-9612-e2afde13945f" in namespace "secrets-7648" to be "Succeeded or Failed" Feb 2 23:09:19.687: INFO: Pod "pod-secrets-3f351b15-8bb5-4132-9612-e2afde13945f": Phase="Pending", Reason="", readiness=false. Elapsed: 27.917869ms Feb 2 23:09:21.692: INFO: Pod "pod-secrets-3f351b15-8bb5-4132-9612-e2afde13945f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032128631s Feb 2 23:09:23.696: INFO: Pod "pod-secrets-3f351b15-8bb5-4132-9612-e2afde13945f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.036484073s Feb 2 23:09:25.701: INFO: Pod "pod-secrets-3f351b15-8bb5-4132-9612-e2afde13945f": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.041428941s STEP: Saw pod success Feb 2 23:09:25.701: INFO: Pod "pod-secrets-3f351b15-8bb5-4132-9612-e2afde13945f" satisfied condition "Succeeded or Failed" Feb 2 23:09:25.704: INFO: Trying to get logs from node leguer-worker pod pod-secrets-3f351b15-8bb5-4132-9612-e2afde13945f container secret-env-test: STEP: delete the pod Feb 2 23:09:25.765: INFO: Waiting for pod pod-secrets-3f351b15-8bb5-4132-9612-e2afde13945f to disappear Feb 2 23:09:25.771: INFO: Pod pod-secrets-3f351b15-8bb5-4132-9612-e2afde13945f no longer exists [AfterEach] [sig-api-machinery] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 2 23:09:25.771: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-7648" for this suite. • [SLOW TEST:6.242 seconds] [sig-api-machinery] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:36 should be consumable from pods in env vars [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":309,"completed":108,"skipped":1780,"failed":0} SSSSSSSSS ------------------------------ [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 2 23:09:25.780: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a job STEP: Ensuring job reaches completions [AfterEach] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 2 23:09:43.843: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-5484" for this suite. 
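The Job spec above ("tasks sometimes fail and are locally restarted") creates a Job whose containers exit non-zero some of the time and relies on restartPolicy: OnFailure, so the kubelet restarts the failing container inside the same pod until the requested number of completions is reached. The following is only a minimal client-go sketch of that shape, not the suite's actual fixture; the Job name, image, command, and counts are illustrative assumptions.

package main

import (
	"context"
	"fmt"

	batchv1 "k8s.io/api/batch/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path as used throughout the log above.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	completions, parallelism := int32(4), int32(2)
	job := &batchv1.Job{
		ObjectMeta: metav1.ObjectMeta{Name: "sometimes-fail"}, // illustrative name
		Spec: batchv1.JobSpec{
			Completions: &completions,
			Parallelism: &parallelism,
			Template: corev1.PodTemplateSpec{
				Spec: corev1.PodSpec{
					// OnFailure makes the kubelet restart the container in place,
					// instead of the Job controller creating replacement pods.
					RestartPolicy: corev1.RestartPolicyOnFailure,
					Containers: []corev1.Container{{
						Name:  "task",
						Image: "busybox",
						// Illustrative "sometimes fails" workload: succeeds only on even seconds.
						Command: []string{"sh", "-c", "test $(( $(date +%s) % 2 )) -eq 0"},
					}},
				},
			},
		},
	}

	// Namespace taken from the log above.
	created, err := client.BatchV1().Jobs("job-5484").Create(context.TODO(), job, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("created Job", created.Name)
	// The spec then waits until status.succeeded reaches *spec.completions,
	// which is what "Ensuring job reaches completions" above refers to.
}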
• [SLOW TEST:18.072 seconds] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":309,"completed":109,"skipped":1789,"failed":0} [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 2 23:09:43.853: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields in an embedded object [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 Feb 2 23:09:43.971: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Feb 2 23:09:47.529: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8188 --namespace=crd-publish-openapi-8188 create -f -' Feb 2 23:09:51.312: INFO: stderr: "" Feb 2 23:09:51.312: INFO: stdout: "e2e-test-crd-publish-openapi-4390-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" Feb 2 23:09:51.312: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8188 --namespace=crd-publish-openapi-8188 delete e2e-test-crd-publish-openapi-4390-crds test-cr' Feb 2 23:09:51.411: INFO: stderr: "" Feb 2 23:09:51.411: INFO: stdout: "e2e-test-crd-publish-openapi-4390-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" Feb 2 23:09:51.411: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8188 --namespace=crd-publish-openapi-8188 apply -f -' Feb 2 23:09:51.730: INFO: stderr: "" Feb 2 23:09:51.730: INFO: stdout: "e2e-test-crd-publish-openapi-4390-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" Feb 2 23:09:51.730: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8188 --namespace=crd-publish-openapi-8188 delete e2e-test-crd-publish-openapi-4390-crds test-cr' Feb 2 23:09:51.836: INFO: stderr: "" Feb 2 23:09:51.836: INFO: stdout: "e2e-test-crd-publish-openapi-4390-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR Feb 2 23:09:51.836: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8188 explain 
e2e-test-crd-publish-openapi-4390-crds' Feb 2 23:09:52.130: INFO: stderr: "" Feb 2 23:09:52.130: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-4390-crd\nVERSION: crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t<>\n Specification of Waldo\n\n status\t\n Status of Waldo\n\n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 2 23:09:55.689: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-8188" for this suite. • [SLOW TEST:11.847 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields in an embedded object [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":309,"completed":110,"skipped":1789,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 2 23:09:55.701: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating secret with name secret-test-4fcd08fc-5f81-44af-9b16-9eec13be51be STEP: Creating a pod to test consume secrets Feb 2 23:09:55.840: INFO: Waiting up to 5m0s for pod "pod-secrets-aaa7b993-0c0e-40be-a54b-50720f68b174" in namespace "secrets-5024" to be "Succeeded or Failed" Feb 2 23:09:55.859: INFO: Pod "pod-secrets-aaa7b993-0c0e-40be-a54b-50720f68b174": Phase="Pending", Reason="", readiness=false. 
Elapsed: 18.587875ms Feb 2 23:09:57.863: INFO: Pod "pod-secrets-aaa7b993-0c0e-40be-a54b-50720f68b174": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02314911s Feb 2 23:09:59.868: INFO: Pod "pod-secrets-aaa7b993-0c0e-40be-a54b-50720f68b174": Phase="Pending", Reason="", readiness=false. Elapsed: 4.027498245s Feb 2 23:10:01.872: INFO: Pod "pod-secrets-aaa7b993-0c0e-40be-a54b-50720f68b174": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.031759773s STEP: Saw pod success Feb 2 23:10:01.872: INFO: Pod "pod-secrets-aaa7b993-0c0e-40be-a54b-50720f68b174" satisfied condition "Succeeded or Failed" Feb 2 23:10:01.875: INFO: Trying to get logs from node leguer-worker pod pod-secrets-aaa7b993-0c0e-40be-a54b-50720f68b174 container secret-volume-test: STEP: delete the pod Feb 2 23:10:01.900: INFO: Waiting for pod pod-secrets-aaa7b993-0c0e-40be-a54b-50720f68b174 to disappear Feb 2 23:10:01.919: INFO: Pod pod-secrets-aaa7b993-0c0e-40be-a54b-50720f68b174 no longer exists [AfterEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 2 23:10:01.919: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-5024" for this suite. • [SLOW TEST:6.248 seconds] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36 should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":309,"completed":111,"skipped":1842,"failed":0} [k8s.io] Variable Expansion should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [k8s.io] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 2 23:10:01.949: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 Feb 2 23:10:04.037: INFO: Deleting pod "var-expansion-1101d077-afb0-47f8-a176-5a4d69496a60" in namespace "var-expansion-1236" Feb 2 23:10:04.073: INFO: Wait up to 5m0s for pod "var-expansion-1101d077-afb0-47f8-a176-5a4d69496a60" to be fully deleted [AfterEach] [k8s.io] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 2 23:11:10.109: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-1236" for this suite. 
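The Variable Expansion spec above builds a pod whose volume mount uses subPathExpr with an environment variable that expands to an absolute path; substituting an absolute path is not allowed, so the container must never start, after which the spec deletes the pod and waits for it to disappear, as logged. A minimal sketch of such a pod follows, assuming an illustrative variable name, path, and pod name rather than the suite's exact fixture.

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "var-expansion-abs-subpath"}, // illustrative name
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name:         "workdir",
				VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
			}},
			Containers: []corev1.Container{{
				Name:    "c",
				Image:   "busybox",
				Command: []string{"sh", "-c", "sleep 3600"},
				Env: []corev1.EnvVar{{
					Name:  "SUBPATH", // illustrative variable name
					Value: "/tmp",    // expands to an absolute path
				}},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "workdir",
					MountPath: "/volume-mount",
					// $(SUBPATH) expands to "/tmp"; an absolute sub path is rejected,
					// so this container is expected to fail to start.
					SubPathExpr: "$(SUBPATH)",
				}},
			}},
		},
	}

	// Namespace taken from the log above.
	if _, err := client.CoreV1().Pods("var-expansion-1236").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
	// The spec then observes the failure, deletes the pod, and waits up to 5m
	// for it to be fully deleted, matching the log lines above.
}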
• [SLOW TEST:68.167 seconds] [k8s.io] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance]","total":309,"completed":112,"skipped":1842,"failed":0} SSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 2 23:11:10.116: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:92 Feb 2 23:11:10.220: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Feb 2 23:11:10.230: INFO: Waiting for terminating namespaces to be deleted... Feb 2 23:11:10.233: INFO: Logging pods the apiserver thinks is on node leguer-worker before test Feb 2 23:11:10.240: INFO: rally-0a12c122-7dnmol6z-vwbwf from c-rally-0a12c122-vmv18pta started at 2021-01-28 03:37:38 +0000 UTC (1 container statuses recorded) Feb 2 23:11:10.240: INFO: Container rally-0a12c122-7dnmol6z ready: true, restart count 0 Feb 2 23:11:10.240: INFO: rally-0a12c122-fagfvvpw-sskvj from c-rally-0a12c122-vmv18pta started at 2021-01-28 03:37:54 +0000 UTC (1 container statuses recorded) Feb 2 23:11:10.240: INFO: Container rally-0a12c122-fagfvvpw ready: true, restart count 0 Feb 2 23:11:10.240: INFO: rally-0a12c122-iqj2mcat-2hfpj from c-rally-0a12c122-vmv18pta started at 2021-01-28 03:37:16 +0000 UTC (1 container statuses recorded) Feb 2 23:11:10.240: INFO: Container rally-0a12c122-iqj2mcat ready: true, restart count 0 Feb 2 23:11:10.240: INFO: rally-0a12c122-iqj2mcat-swp7f from c-rally-0a12c122-vmv18pta started at 2021-01-28 03:37:16 +0000 UTC (1 container statuses recorded) Feb 2 23:11:10.240: INFO: Container rally-0a12c122-iqj2mcat ready: true, restart count 0 Feb 2 23:11:10.240: INFO: rally-a8f48c6d-3kmika18-pdtzv from c-rally-a8f48c6d-bmjklszo started at 2021-01-10 20:04:23 +0000 UTC (1 container statuses recorded) Feb 2 23:11:10.240: INFO: Container rally-a8f48c6d-3kmika18 ready: true, restart count 0 Feb 2 23:11:10.240: INFO: rally-a8f48c6d-3kmika18-pllzg from c-rally-a8f48c6d-bmjklszo started at 2021-01-10 20:04:23 +0000 UTC (1 container statuses recorded) Feb 2 23:11:10.240: INFO: Container rally-a8f48c6d-3kmika18 ready: true, restart count 0 Feb 2 23:11:10.240: INFO: rally-a8f48c6d-4cyi45kq-j5tzz from c-rally-a8f48c6d-bmjklszo started at 2021-01-10 20:04:23 +0000 UTC (1 container statuses recorded) Feb 2 23:11:10.240: INFO: Container rally-a8f48c6d-4cyi45kq ready: true, restart count 0 Feb 2 23:11:10.240: INFO: rally-a8f48c6d-f3hls6a3-57dwc from c-rally-a8f48c6d-bmjklszo 
started at 2021-01-10 20:04:32 +0000 UTC (1 container statuses recorded) Feb 2 23:11:10.240: INFO: Container rally-a8f48c6d-f3hls6a3 ready: true, restart count 0 Feb 2 23:11:10.240: INFO: rally-a8f48c6d-1y3amfc0-lp8st from c-rally-a8f48c6d-lypjtwol started at 2021-01-10 20:04:32 +0000 UTC (1 container statuses recorded) Feb 2 23:11:10.240: INFO: Container rally-a8f48c6d-1y3amfc0 ready: true, restart count 0 Feb 2 23:11:10.240: INFO: rally-a8f48c6d-9pqmjehi-9zwjj from c-rally-a8f48c6d-lypjtwol started at 2021-01-10 20:04:23 +0000 UTC (1 container statuses recorded) Feb 2 23:11:10.240: INFO: Container rally-a8f48c6d-9pqmjehi ready: true, restart count 0 Feb 2 23:11:10.240: INFO: chaos-controller-manager-69c479c674-s796v from default started at 2021-01-10 20:58:24 +0000 UTC (1 container statuses recorded) Feb 2 23:11:10.240: INFO: Container chaos-mesh ready: true, restart count 0 Feb 2 23:11:10.240: INFO: chaos-daemon-lv692 from default started at 2021-01-10 20:58:25 +0000 UTC (1 container statuses recorded) Feb 2 23:11:10.240: INFO: Container chaos-daemon ready: true, restart count 0 Feb 2 23:11:10.240: INFO: kindnet-psm25 from kube-system started at 2021-01-10 17:38:10 +0000 UTC (1 container statuses recorded) Feb 2 23:11:10.240: INFO: Container kindnet-cni ready: true, restart count 0 Feb 2 23:11:10.240: INFO: kube-proxy-bmbcs from kube-system started at 2021-01-10 17:38:10 +0000 UTC (1 container statuses recorded) Feb 2 23:11:10.240: INFO: Container kube-proxy ready: true, restart count 0 Feb 2 23:11:10.240: INFO: Logging pods the apiserver thinks is on node leguer-worker2 before test Feb 2 23:11:10.248: INFO: rally-0a12c122-4xacdhsf-44v5r from c-rally-0a12c122-vmv18pta started at 2021-01-28 03:37:16 +0000 UTC (1 container statuses recorded) Feb 2 23:11:10.248: INFO: Container rally-0a12c122-4xacdhsf ready: true, restart count 0 Feb 2 23:11:10.248: INFO: rally-0a12c122-4xacdhsf-5c974 from c-rally-0a12c122-vmv18pta started at 2021-01-28 03:37:16 +0000 UTC (1 container statuses recorded) Feb 2 23:11:10.248: INFO: Container rally-0a12c122-4xacdhsf ready: true, restart count 0 Feb 2 23:11:10.248: INFO: rally-0a12c122-7dnmol6z-n9ztn from c-rally-0a12c122-vmv18pta started at 2021-01-28 03:37:38 +0000 UTC (1 container statuses recorded) Feb 2 23:11:10.248: INFO: Container rally-0a12c122-7dnmol6z ready: true, restart count 0 Feb 2 23:11:10.248: INFO: rally-0a12c122-fagfvvpw-cxsgt from c-rally-0a12c122-vmv18pta started at 2021-01-28 03:37:53 +0000 UTC (1 container statuses recorded) Feb 2 23:11:10.248: INFO: Container rally-0a12c122-fagfvvpw ready: true, restart count 0 Feb 2 23:11:10.248: INFO: rally-0a12c122-lqiac6cu-6fsz6 from c-rally-0a12c122-vmv18pta started at 2021-01-28 03:37:16 +0000 UTC (1 container statuses recorded) Feb 2 23:11:10.248: INFO: Container rally-0a12c122-lqiac6cu ready: true, restart count 0 Feb 2 23:11:10.248: INFO: rally-0a12c122-lqiac6cu-99jsp from c-rally-0a12c122-vmv18pta started at 2021-01-28 03:37:16 +0000 UTC (1 container statuses recorded) Feb 2 23:11:10.248: INFO: Container rally-0a12c122-lqiac6cu ready: true, restart count 0 Feb 2 23:11:10.248: INFO: rally-a8f48c6d-4cyi45kq-knr4r from c-rally-a8f48c6d-bmjklszo started at 2021-01-10 20:04:23 +0000 UTC (1 container statuses recorded) Feb 2 23:11:10.248: INFO: Container rally-a8f48c6d-4cyi45kq ready: true, restart count 0 Feb 2 23:11:10.248: INFO: rally-a8f48c6d-f3hls6a3-dwt8n from c-rally-a8f48c6d-bmjklszo started at 2021-01-10 20:04:32 +0000 UTC (1 container statuses recorded) Feb 2 23:11:10.248: INFO: Container 
rally-a8f48c6d-f3hls6a3 ready: true, restart count 0 Feb 2 23:11:10.248: INFO: rally-a8f48c6d-1y3amfc0-hh9qk from c-rally-a8f48c6d-lypjtwol started at 2021-01-10 20:04:32 +0000 UTC (1 container statuses recorded) Feb 2 23:11:10.248: INFO: Container rally-a8f48c6d-1y3amfc0 ready: true, restart count 0 Feb 2 23:11:10.248: INFO: rally-a8f48c6d-9pqmjehi-85slb from c-rally-a8f48c6d-lypjtwol started at 2021-01-10 20:04:23 +0000 UTC (1 container statuses recorded) Feb 2 23:11:10.248: INFO: Container rally-a8f48c6d-9pqmjehi ready: true, restart count 0 Feb 2 23:11:10.248: INFO: rally-a8f48c6d-vnukxqu0-llj24 from c-rally-a8f48c6d-lypjtwol started at 2021-01-10 20:04:23 +0000 UTC (1 container statuses recorded) Feb 2 23:11:10.248: INFO: Container rally-a8f48c6d-vnukxqu0 ready: true, restart count 0 Feb 2 23:11:10.248: INFO: rally-a8f48c6d-vnukxqu0-v85kr from c-rally-a8f48c6d-lypjtwol started at 2021-01-10 20:04:23 +0000 UTC (1 container statuses recorded) Feb 2 23:11:10.248: INFO: Container rally-a8f48c6d-vnukxqu0 ready: true, restart count 0 Feb 2 23:11:10.248: INFO: chaos-daemon-ffkg7 from default started at 2021-01-10 20:58:25 +0000 UTC (1 container statuses recorded) Feb 2 23:11:10.248: INFO: Container chaos-daemon ready: true, restart count 0 Feb 2 23:11:10.248: INFO: kindnet-8wggd from kube-system started at 2021-01-10 17:38:10 +0000 UTC (1 container statuses recorded) Feb 2 23:11:10.248: INFO: Container kindnet-cni ready: true, restart count 0 Feb 2 23:11:10.248: INFO: kube-proxy-29gxg from kube-system started at 2021-01-10 17:38:09 +0000 UTC (1 container statuses recorded) Feb 2 23:11:10.248: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that NodeSelector is respected if not matching [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Trying to schedule Pod with nonempty NodeSelector. STEP: Considering event: Type = [Warning], Name = [restricted-pod.1660111a122574c3], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 2 node(s) didn't match Pod's node affinity.] [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 2 23:11:11.285: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-1009" for this suite. 
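The SchedulerPredicates spec above creates a pod ("restricted-pod") carrying a nonempty nodeSelector that no node satisfies, then looks for the FailedScheduling event quoted in the log. A minimal client-go sketch of the same check, with the label key/value and the one-shot event listing as illustrative assumptions (the real spec polls events with a timeout):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	ns := "sched-pred-1009" // namespace taken from the log above

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "restricted-pod"},
		Spec: corev1.PodSpec{
			// A label no node in the cluster carries, so scheduling must fail.
			NodeSelector: map[string]string{"label": "nonempty"}, // illustrative key/value
			Containers: []corev1.Container{{
				Name:  "restricted",
				Image: "k8s.gcr.io/pause:3.2",
			}},
		},
	}
	if _, err := client.CoreV1().Pods(ns).Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}

	// Look for the FailedScheduling event, e.g.
	// "0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: },
	//  that the pod didn't tolerate, 2 node(s) didn't match Pod's node affinity."
	events, err := client.CoreV1().Events(ns).List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, ev := range events.Items {
		if ev.Reason == "FailedScheduling" {
			fmt.Println(ev.Message)
		}
	}
}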
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:83 •{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]","total":309,"completed":113,"skipped":1846,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 2 23:11:11.294: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating configMap with name configmap-test-upd-d8d44e22-5471-4997-a2a0-bc4bcd5f0ade STEP: Creating the pod STEP: Updating configmap configmap-test-upd-d8d44e22-5471-4997-a2a0-bc4bcd5f0ade STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 2 23:11:17.483: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-8167" for this suite. • [SLOW TEST:6.200 seconds] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":309,"completed":114,"skipped":1859,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 2 23:11:17.495: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD with validation schema [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 Feb 2 23:11:17.603: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with known and required properties Feb 2 23:11:21.182: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9416 --namespace=crd-publish-openapi-9416 create -f -' Feb 2 23:11:26.024: 
INFO: stderr: "" Feb 2 23:11:26.024: INFO: stdout: "e2e-test-crd-publish-openapi-898-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" Feb 2 23:11:26.024: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9416 --namespace=crd-publish-openapi-9416 delete e2e-test-crd-publish-openapi-898-crds test-foo' Feb 2 23:11:26.145: INFO: stderr: "" Feb 2 23:11:26.145: INFO: stdout: "e2e-test-crd-publish-openapi-898-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" Feb 2 23:11:26.145: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9416 --namespace=crd-publish-openapi-9416 apply -f -' Feb 2 23:11:26.431: INFO: stderr: "" Feb 2 23:11:26.431: INFO: stdout: "e2e-test-crd-publish-openapi-898-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" Feb 2 23:11:26.431: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9416 --namespace=crd-publish-openapi-9416 delete e2e-test-crd-publish-openapi-898-crds test-foo' Feb 2 23:11:26.550: INFO: stderr: "" Feb 2 23:11:26.551: INFO: stdout: "e2e-test-crd-publish-openapi-898-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" STEP: client-side validation (kubectl create and apply) rejects request with unknown properties when disallowed by the schema Feb 2 23:11:26.551: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9416 --namespace=crd-publish-openapi-9416 create -f -' Feb 2 23:11:26.854: INFO: rc: 1 Feb 2 23:11:26.854: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9416 --namespace=crd-publish-openapi-9416 apply -f -' Feb 2 23:11:27.137: INFO: rc: 1 STEP: client-side validation (kubectl create and apply) rejects request without required properties Feb 2 23:11:27.138: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9416 --namespace=crd-publish-openapi-9416 create -f -' Feb 2 23:11:27.424: INFO: rc: 1 Feb 2 23:11:27.424: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9416 --namespace=crd-publish-openapi-9416 apply -f -' Feb 2 23:11:27.736: INFO: rc: 1 STEP: kubectl explain works to explain CR properties Feb 2 23:11:27.736: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9416 explain e2e-test-crd-publish-openapi-898-crds' Feb 2 23:11:28.021: INFO: stderr: "" Feb 2 23:11:28.021: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-898-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nDESCRIPTION:\n Foo CRD for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. 
Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Foo\n\n status\t\n Status of Foo\n\n" STEP: kubectl explain works to explain CR properties recursively Feb 2 23:11:28.021: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9416 explain e2e-test-crd-publish-openapi-898-crds.metadata' Feb 2 23:11:28.393: INFO: stderr: "" Feb 2 23:11:28.394: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-898-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: metadata \n\nDESCRIPTION:\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n ObjectMeta is metadata that all persisted resources must have, which\n includes all objects users must create.\n\nFIELDS:\n annotations\t\n Annotations is an unstructured key value map stored with a resource that\n may be set by external tools to store and retrieve arbitrary metadata. They\n are not queryable and should be preserved when modifying objects. More\n info: http://kubernetes.io/docs/user-guide/annotations\n\n clusterName\t\n The name of the cluster which the object belongs to. This is used to\n distinguish resources with same name and namespace in different clusters.\n This field is not set anywhere right now and apiserver is going to ignore\n it if set in create or update request.\n\n creationTimestamp\t\n CreationTimestamp is a timestamp representing the server time when this\n object was created. It is not guaranteed to be set in happens-before order\n across separate operations. Clients may not set this value. It is\n represented in RFC3339 form and is in UTC.\n\n Populated by the system. Read-only. Null for lists. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n deletionGracePeriodSeconds\t\n Number of seconds allowed for this object to gracefully terminate before it\n will be removed from the system. Only set when deletionTimestamp is also\n set. May only be shortened. Read-only.\n\n deletionTimestamp\t\n DeletionTimestamp is RFC 3339 date and time at which this resource will be\n deleted. This field is set by the server when a graceful deletion is\n requested by the user, and is not directly settable by a client. The\n resource is expected to be deleted (no longer visible from resource lists,\n and not reachable by name) after the time in this field, once the\n finalizers list is empty. As long as the finalizers list contains items,\n deletion is blocked. Once the deletionTimestamp is set, this value may not\n be unset or be set further into the future, although it may be shortened or\n the resource may be deleted prior to this time. For example, a user may\n request that a pod is deleted in 30 seconds. The Kubelet will react by\n sending a graceful termination signal to the containers in the pod. After\n that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL)\n to the container and after cleanup, remove the pod from the API. 
In the\n presence of network partitions, this object may still exist after this\n timestamp, until an administrator or automated process can determine the\n resource is fully terminated. If not set, graceful deletion of the object\n has not been requested.\n\n Populated by the system when a graceful deletion is requested. Read-only.\n More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n finalizers\t<[]string>\n Must be empty before the object is deleted from the registry. Each entry is\n an identifier for the responsible component that will remove the entry from\n the list. If the deletionTimestamp of the object is non-nil, entries in\n this list can only be removed. Finalizers may be processed and removed in\n any order. Order is NOT enforced because it introduces significant risk of\n stuck finalizers. finalizers is a shared field, any actor with permission\n can reorder it. If the finalizer list is processed in order, then this can\n lead to a situation in which the component responsible for the first\n finalizer in the list is waiting for a signal (field value, external\n system, or other) produced by a component responsible for a finalizer later\n in the list, resulting in a deadlock. Without enforced ordering finalizers\n are free to order amongst themselves and are not vulnerable to ordering\n changes in the list.\n\n generateName\t\n GenerateName is an optional prefix, used by the server, to generate a\n unique name ONLY IF the Name field has not been provided. If this field is\n used, the name returned to the client will be different than the name\n passed. This value will also be combined with a unique suffix. The provided\n value has the same validation rules as the Name field, and may be truncated\n by the length of the suffix required to make the value unique on the\n server.\n\n If this field is specified and the generated name exists, the server will\n NOT return a 409 - instead, it will either return 201 Created or 500 with\n Reason ServerTimeout indicating a unique name could not be found in the\n time allotted, and the client should retry (optionally after the time\n indicated in the Retry-After header).\n\n Applied only if Name is not specified. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency\n\n generation\t\n A sequence number representing a specific generation of the desired state.\n Populated by the system. Read-only.\n\n labels\t\n Map of string keys and values that can be used to organize and categorize\n (scope and select) objects. May match selectors of replication controllers\n and services. More info: http://kubernetes.io/docs/user-guide/labels\n\n managedFields\t<[]Object>\n ManagedFields maps workflow-id and version to the set of fields that are\n managed by that workflow. This is mostly for internal housekeeping, and\n users typically shouldn't need to set or understand this field. A workflow\n can be the user's name, a controller's name, or the name of a specific\n apply path like \"ci-cd\". The set of fields is always in the version that\n the workflow used when modifying the object.\n\n name\t\n Name must be unique within a namespace. Is required when creating\n resources, although some resources may allow a client to request the\n generation of an appropriate name automatically. Name is primarily intended\n for creation idempotence and configuration definition. 
Cannot be updated.\n More info: http://kubernetes.io/docs/user-guide/identifiers#names\n\n namespace\t\n Namespace defines the space within which each name must be unique. An empty\n namespace is equivalent to the \"default\" namespace, but \"default\" is the\n canonical representation. Not all objects are required to be scoped to a\n namespace - the value of this field for those objects will be empty.\n\n Must be a DNS_LABEL. Cannot be updated. More info:\n http://kubernetes.io/docs/user-guide/namespaces\n\n ownerReferences\t<[]Object>\n List of objects depended by this object. If ALL objects in the list have\n been deleted, this object will be garbage collected. If this object is\n managed by a controller, then an entry in this list will point to this\n controller, with the controller field set to true. There cannot be more\n than one managing controller.\n\n resourceVersion\t\n An opaque value that represents the internal version of this object that\n can be used by clients to determine when objects have changed. May be used\n for optimistic concurrency, change detection, and the watch operation on a\n resource or set of resources. Clients must treat these values as opaque and\n passed unmodified back to the server. They may only be valid for a\n particular resource or set of resources.\n\n Populated by the system. Read-only. Value must be treated as opaque by\n clients and . More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency\n\n selfLink\t\n SelfLink is a URL representing this object. Populated by the system.\n Read-only.\n\n DEPRECATED Kubernetes will stop propagating this field in 1.20 release and\n the field is planned to be removed in 1.21 release.\n\n uid\t\n UID is the unique in time and space value for this object. It is typically\n generated by the server on successful creation of a resource and is not\n allowed to change on PUT operations.\n\n Populated by the system. Read-only. 
More info:\n http://kubernetes.io/docs/user-guide/identifiers#uids\n\n" Feb 2 23:11:28.394: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9416 explain e2e-test-crd-publish-openapi-898-crds.spec' Feb 2 23:11:28.672: INFO: stderr: "" Feb 2 23:11:28.672: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-898-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec \n\nDESCRIPTION:\n Specification of Foo\n\nFIELDS:\n bars\t<[]Object>\n List of Bars and their specs.\n\n" Feb 2 23:11:28.673: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9416 explain e2e-test-crd-publish-openapi-898-crds.spec.bars' Feb 2 23:11:28.934: INFO: stderr: "" Feb 2 23:11:28.934: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-898-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n List of Bars and their specs.\n\nFIELDS:\n age\t\n Age of Bar.\n\n bazs\t<[]string>\n List of Bazs.\n\n name\t -required-\n Name of Bar.\n\n" STEP: kubectl explain works to return error when explain is called on property that doesn't exist Feb 2 23:11:28.935: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9416 explain e2e-test-crd-publish-openapi-898-crds.spec.bars2' Feb 2 23:11:29.237: INFO: rc: 1 [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 2 23:11:31.285: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-9416" for this suite. 
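For context, the `kubectl explain` output and client-side validation above both come from the openAPIV3Schema that the CRD publishes. A minimal sketch of such a CRD, with hypothetical group and field names (the real test generates its own names such as e2e-test-crd-publish-openapi-898-crd):

cat <<'EOF' | kubectl apply -f -
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: foos.example.com
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: foos
    singular: foo
    kind: Foo
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        properties:
          spec:
            type: object
            properties:
              bars:
                type: array
                items:
                  type: object
                  required: ["name"]          # "name" is required, as in the explain output above
                  properties:
                    name: {type: string}
                    age:  {type: string}
                    bazs: {type: array, items: {type: string}}
EOF

# Once the schema is published, explain and client-side validation work against it:
kubectl explain foos.spec.bars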
• [SLOW TEST:13.797 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD with validation schema [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":309,"completed":115,"skipped":1878,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for services [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 2 23:11:31.292: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for services [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-5643.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-5643.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-5643.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-5643.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-5643.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-5643.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-5643.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-5643.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-5643.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-5643.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-5643.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-5643.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5643.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 14.174.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.174.14_udp@PTR;check="$$(dig +tcp +noall +answer +search 14.174.96.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.96.174.14_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-5643.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-5643.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-5643.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-5643.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-5643.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-5643.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-5643.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-5643.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-5643.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-5643.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-5643.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-5643.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5643.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 14.174.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.174.14_udp@PTR;check="$$(dig +tcp +noall +answer +search 14.174.96.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.96.174.14_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Feb 2 23:11:37.558: INFO: Unable to read wheezy_udp@dns-test-service.dns-5643.svc.cluster.local from pod dns-5643/dns-test-8dde6baf-92cc-4e3d-8512-d082fea0afb8: the server could not find the requested resource (get pods dns-test-8dde6baf-92cc-4e3d-8512-d082fea0afb8) Feb 2 23:11:37.560: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5643.svc.cluster.local from pod dns-5643/dns-test-8dde6baf-92cc-4e3d-8512-d082fea0afb8: the server could not find the requested resource (get pods dns-test-8dde6baf-92cc-4e3d-8512-d082fea0afb8) Feb 2 23:11:37.563: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5643.svc.cluster.local from pod dns-5643/dns-test-8dde6baf-92cc-4e3d-8512-d082fea0afb8: the server could not find the requested resource (get pods dns-test-8dde6baf-92cc-4e3d-8512-d082fea0afb8) Feb 2 23:11:37.565: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5643.svc.cluster.local from pod dns-5643/dns-test-8dde6baf-92cc-4e3d-8512-d082fea0afb8: the server could not find the requested resource (get pods dns-test-8dde6baf-92cc-4e3d-8512-d082fea0afb8) Feb 2 23:11:37.580: INFO: Unable to read jessie_udp@dns-test-service.dns-5643.svc.cluster.local from pod dns-5643/dns-test-8dde6baf-92cc-4e3d-8512-d082fea0afb8: the server could not find the requested resource (get pods dns-test-8dde6baf-92cc-4e3d-8512-d082fea0afb8) Feb 2 23:11:37.583: INFO: Unable to read jessie_tcp@dns-test-service.dns-5643.svc.cluster.local from pod dns-5643/dns-test-8dde6baf-92cc-4e3d-8512-d082fea0afb8: the server could not find the requested resource (get pods dns-test-8dde6baf-92cc-4e3d-8512-d082fea0afb8) Feb 2 23:11:37.585: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5643.svc.cluster.local from pod dns-5643/dns-test-8dde6baf-92cc-4e3d-8512-d082fea0afb8: the server could not find the requested resource (get pods dns-test-8dde6baf-92cc-4e3d-8512-d082fea0afb8) Feb 2 23:11:37.589: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5643.svc.cluster.local from pod dns-5643/dns-test-8dde6baf-92cc-4e3d-8512-d082fea0afb8: the server could not find the requested resource (get pods dns-test-8dde6baf-92cc-4e3d-8512-d082fea0afb8) Feb 2 23:11:37.606: INFO: Lookups using dns-5643/dns-test-8dde6baf-92cc-4e3d-8512-d082fea0afb8 failed for: [wheezy_udp@dns-test-service.dns-5643.svc.cluster.local wheezy_tcp@dns-test-service.dns-5643.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-5643.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-5643.svc.cluster.local jessie_udp@dns-test-service.dns-5643.svc.cluster.local jessie_tcp@dns-test-service.dns-5643.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-5643.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-5643.svc.cluster.local] Feb 2 23:11:42.611: INFO: Unable to read wheezy_udp@dns-test-service.dns-5643.svc.cluster.local from pod dns-5643/dns-test-8dde6baf-92cc-4e3d-8512-d082fea0afb8: the server could not find the requested resource (get pods dns-test-8dde6baf-92cc-4e3d-8512-d082fea0afb8) Feb 2 23:11:42.614: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5643.svc.cluster.local from pod dns-5643/dns-test-8dde6baf-92cc-4e3d-8512-d082fea0afb8: the server could not find the requested resource (get pods dns-test-8dde6baf-92cc-4e3d-8512-d082fea0afb8) 
Feb 2 23:11:42.617: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5643.svc.cluster.local from pod dns-5643/dns-test-8dde6baf-92cc-4e3d-8512-d082fea0afb8: the server could not find the requested resource (get pods dns-test-8dde6baf-92cc-4e3d-8512-d082fea0afb8) Feb 2 23:11:42.620: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5643.svc.cluster.local from pod dns-5643/dns-test-8dde6baf-92cc-4e3d-8512-d082fea0afb8: the server could not find the requested resource (get pods dns-test-8dde6baf-92cc-4e3d-8512-d082fea0afb8) Feb 2 23:11:42.638: INFO: Unable to read jessie_udp@dns-test-service.dns-5643.svc.cluster.local from pod dns-5643/dns-test-8dde6baf-92cc-4e3d-8512-d082fea0afb8: the server could not find the requested resource (get pods dns-test-8dde6baf-92cc-4e3d-8512-d082fea0afb8) Feb 2 23:11:42.641: INFO: Unable to read jessie_tcp@dns-test-service.dns-5643.svc.cluster.local from pod dns-5643/dns-test-8dde6baf-92cc-4e3d-8512-d082fea0afb8: the server could not find the requested resource (get pods dns-test-8dde6baf-92cc-4e3d-8512-d082fea0afb8) Feb 2 23:11:42.649: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5643.svc.cluster.local from pod dns-5643/dns-test-8dde6baf-92cc-4e3d-8512-d082fea0afb8: the server could not find the requested resource (get pods dns-test-8dde6baf-92cc-4e3d-8512-d082fea0afb8) Feb 2 23:11:42.652: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5643.svc.cluster.local from pod dns-5643/dns-test-8dde6baf-92cc-4e3d-8512-d082fea0afb8: the server could not find the requested resource (get pods dns-test-8dde6baf-92cc-4e3d-8512-d082fea0afb8) Feb 2 23:11:42.666: INFO: Lookups using dns-5643/dns-test-8dde6baf-92cc-4e3d-8512-d082fea0afb8 failed for: [wheezy_udp@dns-test-service.dns-5643.svc.cluster.local wheezy_tcp@dns-test-service.dns-5643.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-5643.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-5643.svc.cluster.local jessie_udp@dns-test-service.dns-5643.svc.cluster.local jessie_tcp@dns-test-service.dns-5643.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-5643.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-5643.svc.cluster.local] Feb 2 23:11:47.639: INFO: Unable to read wheezy_udp@dns-test-service.dns-5643.svc.cluster.local from pod dns-5643/dns-test-8dde6baf-92cc-4e3d-8512-d082fea0afb8: the server could not find the requested resource (get pods dns-test-8dde6baf-92cc-4e3d-8512-d082fea0afb8) Feb 2 23:11:47.643: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5643.svc.cluster.local from pod dns-5643/dns-test-8dde6baf-92cc-4e3d-8512-d082fea0afb8: the server could not find the requested resource (get pods dns-test-8dde6baf-92cc-4e3d-8512-d082fea0afb8) Feb 2 23:11:47.646: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5643.svc.cluster.local from pod dns-5643/dns-test-8dde6baf-92cc-4e3d-8512-d082fea0afb8: the server could not find the requested resource (get pods dns-test-8dde6baf-92cc-4e3d-8512-d082fea0afb8) Feb 2 23:11:47.649: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5643.svc.cluster.local from pod dns-5643/dns-test-8dde6baf-92cc-4e3d-8512-d082fea0afb8: the server could not find the requested resource (get pods dns-test-8dde6baf-92cc-4e3d-8512-d082fea0afb8) Feb 2 23:11:47.678: INFO: Unable to read jessie_udp@dns-test-service.dns-5643.svc.cluster.local from pod dns-5643/dns-test-8dde6baf-92cc-4e3d-8512-d082fea0afb8: the server could not find the requested resource (get pods 
dns-test-8dde6baf-92cc-4e3d-8512-d082fea0afb8) Feb 2 23:11:47.681: INFO: Unable to read jessie_tcp@dns-test-service.dns-5643.svc.cluster.local from pod dns-5643/dns-test-8dde6baf-92cc-4e3d-8512-d082fea0afb8: the server could not find the requested resource (get pods dns-test-8dde6baf-92cc-4e3d-8512-d082fea0afb8) Feb 2 23:11:47.684: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5643.svc.cluster.local from pod dns-5643/dns-test-8dde6baf-92cc-4e3d-8512-d082fea0afb8: the server could not find the requested resource (get pods dns-test-8dde6baf-92cc-4e3d-8512-d082fea0afb8) Feb 2 23:11:47.687: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5643.svc.cluster.local from pod dns-5643/dns-test-8dde6baf-92cc-4e3d-8512-d082fea0afb8: the server could not find the requested resource (get pods dns-test-8dde6baf-92cc-4e3d-8512-d082fea0afb8) Feb 2 23:11:47.704: INFO: Lookups using dns-5643/dns-test-8dde6baf-92cc-4e3d-8512-d082fea0afb8 failed for: [wheezy_udp@dns-test-service.dns-5643.svc.cluster.local wheezy_tcp@dns-test-service.dns-5643.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-5643.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-5643.svc.cluster.local jessie_udp@dns-test-service.dns-5643.svc.cluster.local jessie_tcp@dns-test-service.dns-5643.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-5643.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-5643.svc.cluster.local] Feb 2 23:11:52.613: INFO: Unable to read wheezy_udp@dns-test-service.dns-5643.svc.cluster.local from pod dns-5643/dns-test-8dde6baf-92cc-4e3d-8512-d082fea0afb8: the server could not find the requested resource (get pods dns-test-8dde6baf-92cc-4e3d-8512-d082fea0afb8) Feb 2 23:11:52.619: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5643.svc.cluster.local from pod dns-5643/dns-test-8dde6baf-92cc-4e3d-8512-d082fea0afb8: the server could not find the requested resource (get pods dns-test-8dde6baf-92cc-4e3d-8512-d082fea0afb8) Feb 2 23:11:52.622: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5643.svc.cluster.local from pod dns-5643/dns-test-8dde6baf-92cc-4e3d-8512-d082fea0afb8: the server could not find the requested resource (get pods dns-test-8dde6baf-92cc-4e3d-8512-d082fea0afb8) Feb 2 23:11:52.624: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5643.svc.cluster.local from pod dns-5643/dns-test-8dde6baf-92cc-4e3d-8512-d082fea0afb8: the server could not find the requested resource (get pods dns-test-8dde6baf-92cc-4e3d-8512-d082fea0afb8) Feb 2 23:11:52.640: INFO: Unable to read jessie_udp@dns-test-service.dns-5643.svc.cluster.local from pod dns-5643/dns-test-8dde6baf-92cc-4e3d-8512-d082fea0afb8: the server could not find the requested resource (get pods dns-test-8dde6baf-92cc-4e3d-8512-d082fea0afb8) Feb 2 23:11:52.643: INFO: Unable to read jessie_tcp@dns-test-service.dns-5643.svc.cluster.local from pod dns-5643/dns-test-8dde6baf-92cc-4e3d-8512-d082fea0afb8: the server could not find the requested resource (get pods dns-test-8dde6baf-92cc-4e3d-8512-d082fea0afb8) Feb 2 23:11:52.645: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5643.svc.cluster.local from pod dns-5643/dns-test-8dde6baf-92cc-4e3d-8512-d082fea0afb8: the server could not find the requested resource (get pods dns-test-8dde6baf-92cc-4e3d-8512-d082fea0afb8) Feb 2 23:11:52.648: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5643.svc.cluster.local from pod dns-5643/dns-test-8dde6baf-92cc-4e3d-8512-d082fea0afb8: the server could 
not find the requested resource (get pods dns-test-8dde6baf-92cc-4e3d-8512-d082fea0afb8) Feb 2 23:11:52.666: INFO: Lookups using dns-5643/dns-test-8dde6baf-92cc-4e3d-8512-d082fea0afb8 failed for: [wheezy_udp@dns-test-service.dns-5643.svc.cluster.local wheezy_tcp@dns-test-service.dns-5643.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-5643.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-5643.svc.cluster.local jessie_udp@dns-test-service.dns-5643.svc.cluster.local jessie_tcp@dns-test-service.dns-5643.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-5643.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-5643.svc.cluster.local] Feb 2 23:11:57.611: INFO: Unable to read wheezy_udp@dns-test-service.dns-5643.svc.cluster.local from pod dns-5643/dns-test-8dde6baf-92cc-4e3d-8512-d082fea0afb8: the server could not find the requested resource (get pods dns-test-8dde6baf-92cc-4e3d-8512-d082fea0afb8) Feb 2 23:11:57.614: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5643.svc.cluster.local from pod dns-5643/dns-test-8dde6baf-92cc-4e3d-8512-d082fea0afb8: the server could not find the requested resource (get pods dns-test-8dde6baf-92cc-4e3d-8512-d082fea0afb8) Feb 2 23:11:57.616: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5643.svc.cluster.local from pod dns-5643/dns-test-8dde6baf-92cc-4e3d-8512-d082fea0afb8: the server could not find the requested resource (get pods dns-test-8dde6baf-92cc-4e3d-8512-d082fea0afb8) Feb 2 23:11:57.619: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5643.svc.cluster.local from pod dns-5643/dns-test-8dde6baf-92cc-4e3d-8512-d082fea0afb8: the server could not find the requested resource (get pods dns-test-8dde6baf-92cc-4e3d-8512-d082fea0afb8) Feb 2 23:11:57.638: INFO: Unable to read jessie_udp@dns-test-service.dns-5643.svc.cluster.local from pod dns-5643/dns-test-8dde6baf-92cc-4e3d-8512-d082fea0afb8: the server could not find the requested resource (get pods dns-test-8dde6baf-92cc-4e3d-8512-d082fea0afb8) Feb 2 23:11:57.655: INFO: Unable to read jessie_tcp@dns-test-service.dns-5643.svc.cluster.local from pod dns-5643/dns-test-8dde6baf-92cc-4e3d-8512-d082fea0afb8: the server could not find the requested resource (get pods dns-test-8dde6baf-92cc-4e3d-8512-d082fea0afb8) Feb 2 23:11:57.658: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5643.svc.cluster.local from pod dns-5643/dns-test-8dde6baf-92cc-4e3d-8512-d082fea0afb8: the server could not find the requested resource (get pods dns-test-8dde6baf-92cc-4e3d-8512-d082fea0afb8) Feb 2 23:11:57.661: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5643.svc.cluster.local from pod dns-5643/dns-test-8dde6baf-92cc-4e3d-8512-d082fea0afb8: the server could not find the requested resource (get pods dns-test-8dde6baf-92cc-4e3d-8512-d082fea0afb8) Feb 2 23:11:57.677: INFO: Lookups using dns-5643/dns-test-8dde6baf-92cc-4e3d-8512-d082fea0afb8 failed for: [wheezy_udp@dns-test-service.dns-5643.svc.cluster.local wheezy_tcp@dns-test-service.dns-5643.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-5643.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-5643.svc.cluster.local jessie_udp@dns-test-service.dns-5643.svc.cluster.local jessie_tcp@dns-test-service.dns-5643.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-5643.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-5643.svc.cluster.local] Feb 2 23:12:02.611: INFO: Unable to read wheezy_udp@dns-test-service.dns-5643.svc.cluster.local 
from pod dns-5643/dns-test-8dde6baf-92cc-4e3d-8512-d082fea0afb8: the server could not find the requested resource (get pods dns-test-8dde6baf-92cc-4e3d-8512-d082fea0afb8) Feb 2 23:12:02.615: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5643.svc.cluster.local from pod dns-5643/dns-test-8dde6baf-92cc-4e3d-8512-d082fea0afb8: the server could not find the requested resource (get pods dns-test-8dde6baf-92cc-4e3d-8512-d082fea0afb8) Feb 2 23:12:02.617: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5643.svc.cluster.local from pod dns-5643/dns-test-8dde6baf-92cc-4e3d-8512-d082fea0afb8: the server could not find the requested resource (get pods dns-test-8dde6baf-92cc-4e3d-8512-d082fea0afb8) Feb 2 23:12:02.620: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5643.svc.cluster.local from pod dns-5643/dns-test-8dde6baf-92cc-4e3d-8512-d082fea0afb8: the server could not find the requested resource (get pods dns-test-8dde6baf-92cc-4e3d-8512-d082fea0afb8) Feb 2 23:12:02.642: INFO: Unable to read jessie_udp@dns-test-service.dns-5643.svc.cluster.local from pod dns-5643/dns-test-8dde6baf-92cc-4e3d-8512-d082fea0afb8: the server could not find the requested resource (get pods dns-test-8dde6baf-92cc-4e3d-8512-d082fea0afb8) Feb 2 23:12:02.645: INFO: Unable to read jessie_tcp@dns-test-service.dns-5643.svc.cluster.local from pod dns-5643/dns-test-8dde6baf-92cc-4e3d-8512-d082fea0afb8: the server could not find the requested resource (get pods dns-test-8dde6baf-92cc-4e3d-8512-d082fea0afb8) Feb 2 23:12:02.647: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5643.svc.cluster.local from pod dns-5643/dns-test-8dde6baf-92cc-4e3d-8512-d082fea0afb8: the server could not find the requested resource (get pods dns-test-8dde6baf-92cc-4e3d-8512-d082fea0afb8) Feb 2 23:12:02.649: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5643.svc.cluster.local from pod dns-5643/dns-test-8dde6baf-92cc-4e3d-8512-d082fea0afb8: the server could not find the requested resource (get pods dns-test-8dde6baf-92cc-4e3d-8512-d082fea0afb8) Feb 2 23:12:02.698: INFO: Lookups using dns-5643/dns-test-8dde6baf-92cc-4e3d-8512-d082fea0afb8 failed for: [wheezy_udp@dns-test-service.dns-5643.svc.cluster.local wheezy_tcp@dns-test-service.dns-5643.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-5643.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-5643.svc.cluster.local jessie_udp@dns-test-service.dns-5643.svc.cluster.local jessie_tcp@dns-test-service.dns-5643.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-5643.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-5643.svc.cluster.local] Feb 2 23:12:07.666: INFO: DNS probes using dns-5643/dns-test-8dde6baf-92cc-4e3d-8512-d082fea0afb8 succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 2 23:12:08.461: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-5643" for this suite. 
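For reference, the probe scripts above boil down to dig lookups for the service's A and SRV records from inside the cluster. An illustrative sketch, with hypothetical service and namespace names and a commonly used dnsutils image (any image that ships dig works):

kubectl run -it --rm dns-probe --restart=Never --image=tutum/dnsutils -- sh -c '
  dig +search +short my-service.my-namespace.svc.cluster.local A
  dig +search +short _http._tcp.my-service.my-namespace.svc.cluster.local SRV
'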
• [SLOW TEST:37.214 seconds] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for services [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for services [Conformance]","total":309,"completed":116,"skipped":1896,"failed":0} S ------------------------------ [sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-auth] Certificates API [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 2 23:12:08.506: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename certificates STEP: Waiting for a default service account to be provisioned in namespace [It] should support CSR API operations [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: getting /apis STEP: getting /apis/certificates.k8s.io STEP: getting /apis/certificates.k8s.io/v1 STEP: creating STEP: getting STEP: listing STEP: watching Feb 2 23:12:09.483: INFO: starting watch STEP: patching STEP: updating Feb 2 23:12:09.493: INFO: waiting for watch events with expected annotations Feb 2 23:12:09.493: INFO: saw patched and updated annotations STEP: getting /approval STEP: patching /approval STEP: updating /approval STEP: getting /status STEP: patching /status STEP: updating /status STEP: deleting STEP: deleting a collection [AfterEach] [sig-auth] Certificates API [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 2 23:12:09.866: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "certificates-550" for this suite. 
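For context, the CSR API surface walked above (create, get/list/watch, patch/update, plus the /approval and /status sub-resources) can also be driven from kubectl. A hedged sketch with hypothetical names; the key and CSR files are generated locally:

openssl req -new -newkey rsa:2048 -nodes -keyout demo.key -subj "/CN=demo-user" -out demo.csr

cat <<EOF | kubectl create -f -
apiVersion: certificates.k8s.io/v1
kind: CertificateSigningRequest
metadata:
  name: demo-csr
spec:
  request: $(base64 demo.csr | tr -d '\n')
  signerName: kubernetes.io/kube-apiserver-client
  usages: ["client auth"]
EOF

kubectl get csr demo-csr
kubectl certificate approve demo-csr                            # writes the /approval sub-resource
kubectl get csr demo-csr -o jsonpath='{.status.certificate}'    # reads the /status sub-resource
kubectl delete csr demo-csr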
•{"msg":"PASSED [sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","total":309,"completed":117,"skipped":1897,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should get a host IP [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 2 23:12:09.911: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:187 [It] should get a host IP [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: creating pod Feb 2 23:12:14.405: INFO: Pod pod-hostip-5f8ec0f7-28ff-4582-a1b8-a4d67721c5a5 has hostIP: 172.18.0.13 [AfterEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 2 23:12:14.405: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-8948" for this suite. •{"msg":"PASSED [k8s.io] Pods should get a host IP [NodeConformance] [Conformance]","total":309,"completed":118,"skipped":1919,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 2 23:12:14.413: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] getting/updating/patching custom resource definition status sub-resource works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 Feb 2 23:12:14.523: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 2 23:12:15.105: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-9355" for this suite. 
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance]","total":309,"completed":119,"skipped":1937,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 2 23:12:15.159: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Feb 2 23:12:15.699: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Feb 2 23:12:17.711: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63747904335, loc:(*time.Location)(0x7962e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63747904335, loc:(*time.Location)(0x7962e20)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63747904335, loc:(*time.Location)(0x7962e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63747904335, loc:(*time.Location)(0x7962e20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6bd9446d55\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Feb 2 23:12:21.088: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a validating webhook should work [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a validating webhook configuration STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Updating a validating webhook configuration's rules to not include the create operation STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Patching a validating webhook configuration's rules to include the create operation STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 2 23:12:21.211: INFO: Waiting up to 3m0s for all (but 0) nodes to be 
ready STEP: Destroying namespace "webhook-3528" for this suite. STEP: Destroying namespace "webhook-3528-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 • [SLOW TEST:6.179 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a validating webhook should work [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":309,"completed":120,"skipped":1971,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 2 23:12:21.339: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] binary data should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating configMap with name configmap-test-upd-9d40162b-c923-4328-b663-1e2b423b23aa STEP: Creating the pod STEP: Waiting for pod with text data STEP: Waiting for pod with binary data [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 2 23:12:27.892: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5919" for this suite. 
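For context, the binary-data behaviour checked above can be illustrated with a ConfigMap carrying both a text key (data) and a binary key (binaryData), mounted as a volume. Names, the payload, and the image below are hypothetical:

kubectl create configmap demo-cm --from-literal=text=hello
kubectl patch configmap demo-cm --type=merge -p '{"binaryData":{"blob":"3q2+7w=="}}'   # 0xDE 0xAD 0xBE 0xEF, base64-encoded

cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: cm-binary-demo
spec:
  restartPolicy: Never
  containers:
  - name: dump
    image: busybox
    command: ["sh", "-c", "cat /etc/demo/text; base64 /etc/demo/blob"]
    volumeMounts:
    - name: cm
      mountPath: /etc/demo
  volumes:
  - name: cm
    configMap:
      name: demo-cm
EOF

kubectl logs cm-binary-demo   # should print "hello" and the blob re-encoded as 3q2+7w==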
• [SLOW TEST:6.564 seconds] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 binary data should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":309,"completed":121,"skipped":1988,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 2 23:12:27.903: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Feb 2 23:12:28.561: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Feb 2 23:12:30.650: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63747904348, loc:(*time.Location)(0x7962e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63747904348, loc:(*time.Location)(0x7962e20)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63747904348, loc:(*time.Location)(0x7962e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63747904348, loc:(*time.Location)(0x7962e20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6bd9446d55\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Feb 2 23:12:33.705: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should deny crd creation [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Registering the crd webhook via the AdmissionRegistration API STEP: Creating a custom resource definition that should be denied by the webhook Feb 2 23:12:33.722: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 2 23:12:33.742: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-6511" for this 
suite. STEP: Destroying namespace "webhook-6511-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 • [SLOW TEST:6.367 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should deny crd creation [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":309,"completed":122,"skipped":2006,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 2 23:12:34.271: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] should include custom resource definition resources in discovery documents [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: fetching the /apis discovery document STEP: finding the apiextensions.k8s.io API group in the /apis discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/apiextensions.k8s.io discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis/apiextensions.k8s.io discovery document STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 2 23:12:34.378: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-639" for this suite. 
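For reference, the discovery documents walked above can be fetched by hand from the raw API paths (jq is used here only for readability and is an assumption about the local toolbox):

kubectl get --raw /apis | jq '.groups[] | select(.name == "apiextensions.k8s.io")'
kubectl get --raw /apis/apiextensions.k8s.io                                    # group document, lists served versions
kubectl get --raw /apis/apiextensions.k8s.io/v1 | jq '.resources[] | .name'     # should include "customresourcedefinitions"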
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":309,"completed":123,"skipped":2070,"failed":0} SSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [k8s.io] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 2 23:12:34.406: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a pod to test override all Feb 2 23:12:34.695: INFO: Waiting up to 5m0s for pod "client-containers-f7e71556-9b88-4423-bde9-733a4fe2453b" in namespace "containers-7940" to be "Succeeded or Failed" Feb 2 23:12:34.934: INFO: Pod "client-containers-f7e71556-9b88-4423-bde9-733a4fe2453b": Phase="Pending", Reason="", readiness=false. Elapsed: 238.287803ms Feb 2 23:12:36.938: INFO: Pod "client-containers-f7e71556-9b88-4423-bde9-733a4fe2453b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.242904418s Feb 2 23:12:38.942: INFO: Pod "client-containers-f7e71556-9b88-4423-bde9-733a4fe2453b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.246331992s STEP: Saw pod success Feb 2 23:12:38.942: INFO: Pod "client-containers-f7e71556-9b88-4423-bde9-733a4fe2453b" satisfied condition "Succeeded or Failed" Feb 2 23:12:38.944: INFO: Trying to get logs from node leguer-worker2 pod client-containers-f7e71556-9b88-4423-bde9-733a4fe2453b container agnhost-container: STEP: delete the pod Feb 2 23:12:39.010: INFO: Waiting for pod client-containers-f7e71556-9b88-4423-bde9-733a4fe2453b to disappear Feb 2 23:12:39.014: INFO: Pod client-containers-f7e71556-9b88-4423-bde9-733a4fe2453b no longer exists [AfterEach] [k8s.io] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 2 23:12:39.015: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-7940" for this suite. 
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":309,"completed":124,"skipped":2079,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 2 23:12:39.070: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247 [BeforeEach] Kubectl replace /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1554 [It] should update a single-container pod's image [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: running the image docker.io/library/httpd:2.4.38-alpine Feb 2 23:12:39.145: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-2330 run e2e-test-httpd-pod --image=docker.io/library/httpd:2.4.38-alpine --labels=run=e2e-test-httpd-pod' Feb 2 23:12:39.316: INFO: stderr: "" Feb 2 23:12:39.316: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod is running STEP: verifying the pod e2e-test-httpd-pod was created Feb 2 23:12:44.367: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-2330 get pod e2e-test-httpd-pod -o json' Feb 2 23:12:44.481: INFO: stderr: "" Feb 2 23:12:44.481: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2021-02-02T23:12:39Z\",\n \"labels\": {\n \"run\": \"e2e-test-httpd-pod\"\n },\n \"managedFields\": [\n {\n \"apiVersion\": \"v1\",\n \"fieldsType\": \"FieldsV1\",\n \"fieldsV1\": {\n \"f:metadata\": {\n \"f:labels\": {\n \".\": {},\n \"f:run\": {}\n }\n },\n \"f:spec\": {\n \"f:containers\": {\n \"k:{\\\"name\\\":\\\"e2e-test-httpd-pod\\\"}\": {\n \".\": {},\n \"f:image\": {},\n \"f:imagePullPolicy\": {},\n \"f:name\": {},\n \"f:resources\": {},\n \"f:terminationMessagePath\": {},\n \"f:terminationMessagePolicy\": {}\n }\n },\n \"f:dnsPolicy\": {},\n \"f:enableServiceLinks\": {},\n \"f:restartPolicy\": {},\n \"f:schedulerName\": {},\n \"f:securityContext\": {},\n \"f:terminationGracePeriodSeconds\": {}\n }\n },\n \"manager\": \"kubectl-run\",\n \"operation\": \"Update\",\n \"time\": \"2021-02-02T23:12:39Z\"\n },\n {\n \"apiVersion\": \"v1\",\n \"fieldsType\": \"FieldsV1\",\n \"fieldsV1\": {\n \"f:status\": {\n \"f:conditions\": {\n \"k:{\\\"type\\\":\\\"ContainersReady\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n },\n \"k:{\\\"type\\\":\\\"Initialized\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n },\n \"k:{\\\"type\\\":\\\"Ready\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n 
\"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n }\n },\n \"f:containerStatuses\": {},\n \"f:hostIP\": {},\n \"f:phase\": {},\n \"f:podIP\": {},\n \"f:podIPs\": {\n \".\": {},\n \"k:{\\\"ip\\\":\\\"10.244.2.42\\\"}\": {\n \".\": {},\n \"f:ip\": {}\n }\n },\n \"f:startTime\": {}\n }\n },\n \"manager\": \"kubelet\",\n \"operation\": \"Update\",\n \"time\": \"2021-02-02T23:12:42Z\"\n }\n ],\n \"name\": \"e2e-test-httpd-pod\",\n \"namespace\": \"kubectl-2330\",\n \"resourceVersion\": \"4179252\",\n \"uid\": \"0b143382-8996-42ed-98b4-dfb41240307c\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-httpd-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-rwxj6\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"leguer-worker\",\n \"preemptionPolicy\": \"PreemptLowerPriority\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-rwxj6\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-rwxj6\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-02-02T23:12:39Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-02-02T23:12:42Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-02-02T23:12:42Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-02-02T23:12:39Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"containerd://de4473a224fecf43fc437769a6c96c36c7da6ae29524ee83f05661e36fb4584c\",\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imageID\": \"docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060\",\n \"lastState\": {},\n \"name\": \"e2e-test-httpd-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"started\": true,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2021-02-02T23:12:41Z\"\n }\n }\n }\n ],\n \"hostIP\": \"172.18.0.13\",\n \"phase\": \"Running\",\n \"podIP\": \"10.244.2.42\",\n \"podIPs\": [\n {\n \"ip\": \"10.244.2.42\"\n }\n ],\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2021-02-02T23:12:39Z\"\n }\n}\n" STEP: replace the image in the pod Feb 2 23:12:44.481: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-2330 replace -f -' Feb 2 23:12:44.873: INFO: stderr: "" Feb 2 23:12:44.873: INFO: stdout: "pod/e2e-test-httpd-pod replaced\n" STEP: verifying 
the pod e2e-test-httpd-pod has the right image docker.io/library/busybox:1.29 [AfterEach] Kubectl replace /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1558 Feb 2 23:12:44.879: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-2330 delete pods e2e-test-httpd-pod' Feb 2 23:12:50.142: INFO: stderr: "" Feb 2 23:12:50.142: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 2 23:12:50.142: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2330" for this suite. • [SLOW TEST:11.083 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl replace /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1551 should update a single-container pod's image [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance]","total":309,"completed":125,"skipped":2105,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Discovery should validate PreferredVersion for each APIGroup [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] Discovery /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 2 23:12:50.154: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename discovery STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Discovery /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/discovery.go:39 STEP: Setting up server cert [It] should validate PreferredVersion for each APIGroup [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 Feb 2 23:12:50.711: INFO: Checking APIGroup: apiregistration.k8s.io Feb 2 23:12:50.712: INFO: PreferredVersion.GroupVersion: apiregistration.k8s.io/v1 Feb 2 23:12:50.712: INFO: Versions found [{apiregistration.k8s.io/v1 v1} {apiregistration.k8s.io/v1beta1 v1beta1}] Feb 2 23:12:50.712: INFO: apiregistration.k8s.io/v1 matches apiregistration.k8s.io/v1 Feb 2 23:12:50.712: INFO: Checking APIGroup: apps Feb 2 23:12:50.713: INFO: PreferredVersion.GroupVersion: apps/v1 Feb 2 23:12:50.713: INFO: Versions found [{apps/v1 v1}] Feb 2 23:12:50.713: INFO: apps/v1 matches apps/v1 Feb 2 23:12:50.713: INFO: Checking APIGroup: events.k8s.io Feb 2 23:12:50.714: INFO: PreferredVersion.GroupVersion: events.k8s.io/v1 Feb 2 23:12:50.714: INFO: Versions found [{events.k8s.io/v1 v1} {events.k8s.io/v1beta1 v1beta1}] Feb 2 23:12:50.714: INFO: events.k8s.io/v1 matches events.k8s.io/v1 Feb 2 23:12:50.714: INFO: Checking APIGroup: authentication.k8s.io Feb 2 23:12:50.715: INFO: PreferredVersion.GroupVersion: authentication.k8s.io/v1 Feb 2 23:12:50.715: INFO: Versions found 
[{authentication.k8s.io/v1 v1} {authentication.k8s.io/v1beta1 v1beta1}] Feb 2 23:12:50.715: INFO: authentication.k8s.io/v1 matches authentication.k8s.io/v1 Feb 2 23:12:50.715: INFO: Checking APIGroup: authorization.k8s.io Feb 2 23:12:50.715: INFO: PreferredVersion.GroupVersion: authorization.k8s.io/v1 Feb 2 23:12:50.715: INFO: Versions found [{authorization.k8s.io/v1 v1} {authorization.k8s.io/v1beta1 v1beta1}] Feb 2 23:12:50.715: INFO: authorization.k8s.io/v1 matches authorization.k8s.io/v1 Feb 2 23:12:50.715: INFO: Checking APIGroup: autoscaling Feb 2 23:12:50.716: INFO: PreferredVersion.GroupVersion: autoscaling/v1 Feb 2 23:12:50.716: INFO: Versions found [{autoscaling/v1 v1} {autoscaling/v2beta1 v2beta1} {autoscaling/v2beta2 v2beta2}] Feb 2 23:12:50.716: INFO: autoscaling/v1 matches autoscaling/v1 Feb 2 23:12:50.716: INFO: Checking APIGroup: batch Feb 2 23:12:50.717: INFO: PreferredVersion.GroupVersion: batch/v1 Feb 2 23:12:50.717: INFO: Versions found [{batch/v1 v1} {batch/v1beta1 v1beta1}] Feb 2 23:12:50.717: INFO: batch/v1 matches batch/v1 Feb 2 23:12:50.717: INFO: Checking APIGroup: certificates.k8s.io Feb 2 23:12:50.718: INFO: PreferredVersion.GroupVersion: certificates.k8s.io/v1 Feb 2 23:12:50.718: INFO: Versions found [{certificates.k8s.io/v1 v1} {certificates.k8s.io/v1beta1 v1beta1}] Feb 2 23:12:50.718: INFO: certificates.k8s.io/v1 matches certificates.k8s.io/v1 Feb 2 23:12:50.718: INFO: Checking APIGroup: networking.k8s.io Feb 2 23:12:50.719: INFO: PreferredVersion.GroupVersion: networking.k8s.io/v1 Feb 2 23:12:50.719: INFO: Versions found [{networking.k8s.io/v1 v1} {networking.k8s.io/v1beta1 v1beta1}] Feb 2 23:12:50.719: INFO: networking.k8s.io/v1 matches networking.k8s.io/v1 Feb 2 23:12:50.719: INFO: Checking APIGroup: extensions Feb 2 23:12:50.719: INFO: PreferredVersion.GroupVersion: extensions/v1beta1 Feb 2 23:12:50.719: INFO: Versions found [{extensions/v1beta1 v1beta1}] Feb 2 23:12:50.719: INFO: extensions/v1beta1 matches extensions/v1beta1 Feb 2 23:12:50.719: INFO: Checking APIGroup: policy Feb 2 23:12:50.720: INFO: PreferredVersion.GroupVersion: policy/v1beta1 Feb 2 23:12:50.720: INFO: Versions found [{policy/v1beta1 v1beta1}] Feb 2 23:12:50.720: INFO: policy/v1beta1 matches policy/v1beta1 Feb 2 23:12:50.720: INFO: Checking APIGroup: rbac.authorization.k8s.io Feb 2 23:12:50.721: INFO: PreferredVersion.GroupVersion: rbac.authorization.k8s.io/v1 Feb 2 23:12:50.721: INFO: Versions found [{rbac.authorization.k8s.io/v1 v1} {rbac.authorization.k8s.io/v1beta1 v1beta1}] Feb 2 23:12:50.721: INFO: rbac.authorization.k8s.io/v1 matches rbac.authorization.k8s.io/v1 Feb 2 23:12:50.721: INFO: Checking APIGroup: storage.k8s.io Feb 2 23:12:50.722: INFO: PreferredVersion.GroupVersion: storage.k8s.io/v1 Feb 2 23:12:50.722: INFO: Versions found [{storage.k8s.io/v1 v1} {storage.k8s.io/v1beta1 v1beta1}] Feb 2 23:12:50.722: INFO: storage.k8s.io/v1 matches storage.k8s.io/v1 Feb 2 23:12:50.722: INFO: Checking APIGroup: admissionregistration.k8s.io Feb 2 23:12:50.723: INFO: PreferredVersion.GroupVersion: admissionregistration.k8s.io/v1 Feb 2 23:12:50.723: INFO: Versions found [{admissionregistration.k8s.io/v1 v1} {admissionregistration.k8s.io/v1beta1 v1beta1}] Feb 2 23:12:50.723: INFO: admissionregistration.k8s.io/v1 matches admissionregistration.k8s.io/v1 Feb 2 23:12:50.723: INFO: Checking APIGroup: apiextensions.k8s.io Feb 2 23:12:50.723: INFO: PreferredVersion.GroupVersion: apiextensions.k8s.io/v1 Feb 2 23:12:50.723: INFO: Versions found [{apiextensions.k8s.io/v1 v1} 
{apiextensions.k8s.io/v1beta1 v1beta1}] Feb 2 23:12:50.723: INFO: apiextensions.k8s.io/v1 matches apiextensions.k8s.io/v1 Feb 2 23:12:50.723: INFO: Checking APIGroup: scheduling.k8s.io Feb 2 23:12:50.724: INFO: PreferredVersion.GroupVersion: scheduling.k8s.io/v1 Feb 2 23:12:50.724: INFO: Versions found [{scheduling.k8s.io/v1 v1} {scheduling.k8s.io/v1beta1 v1beta1}] Feb 2 23:12:50.724: INFO: scheduling.k8s.io/v1 matches scheduling.k8s.io/v1 Feb 2 23:12:50.724: INFO: Checking APIGroup: coordination.k8s.io Feb 2 23:12:50.725: INFO: PreferredVersion.GroupVersion: coordination.k8s.io/v1 Feb 2 23:12:50.725: INFO: Versions found [{coordination.k8s.io/v1 v1} {coordination.k8s.io/v1beta1 v1beta1}] Feb 2 23:12:50.725: INFO: coordination.k8s.io/v1 matches coordination.k8s.io/v1 Feb 2 23:12:50.725: INFO: Checking APIGroup: node.k8s.io Feb 2 23:12:50.726: INFO: PreferredVersion.GroupVersion: node.k8s.io/v1 Feb 2 23:12:50.726: INFO: Versions found [{node.k8s.io/v1 v1} {node.k8s.io/v1beta1 v1beta1}] Feb 2 23:12:50.726: INFO: node.k8s.io/v1 matches node.k8s.io/v1 Feb 2 23:12:50.726: INFO: Checking APIGroup: discovery.k8s.io Feb 2 23:12:50.727: INFO: PreferredVersion.GroupVersion: discovery.k8s.io/v1beta1 Feb 2 23:12:50.727: INFO: Versions found [{discovery.k8s.io/v1beta1 v1beta1}] Feb 2 23:12:50.727: INFO: discovery.k8s.io/v1beta1 matches discovery.k8s.io/v1beta1 Feb 2 23:12:50.727: INFO: Checking APIGroup: flowcontrol.apiserver.k8s.io Feb 2 23:12:50.728: INFO: PreferredVersion.GroupVersion: flowcontrol.apiserver.k8s.io/v1beta1 Feb 2 23:12:50.728: INFO: Versions found [{flowcontrol.apiserver.k8s.io/v1beta1 v1beta1}] Feb 2 23:12:50.728: INFO: flowcontrol.apiserver.k8s.io/v1beta1 matches flowcontrol.apiserver.k8s.io/v1beta1 Feb 2 23:12:50.728: INFO: Checking APIGroup: pingcap.com Feb 2 23:12:50.729: INFO: PreferredVersion.GroupVersion: pingcap.com/v1alpha1 Feb 2 23:12:50.729: INFO: Versions found [{pingcap.com/v1alpha1 v1alpha1}] Feb 2 23:12:50.729: INFO: pingcap.com/v1alpha1 matches pingcap.com/v1alpha1 [AfterEach] [sig-api-machinery] Discovery /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 2 23:12:50.729: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "discovery-8750" for this suite. 
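The Discovery test above walks every API group returned by /apis and asserts that each group's PreferredVersion also appears in that group's Versions list (including the aggregated pingcap.com group served by a CRD on this cluster). The same check can be reproduced with the client-go discovery client, roughly as sketched below.

package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// ServerGroups fetches the same /apis discovery document the test walks.
	groups, err := client.Discovery().ServerGroups()
	if err != nil {
		panic(err)
	}

	for _, g := range groups.Groups {
		found := false
		for _, v := range g.Versions {
			if v.GroupVersion == g.PreferredVersion.GroupVersion {
				found = true
				break
			}
		}
		// Mirrors the "<preferred> matches <preferred>" lines in the log above.
		fmt.Printf("group %q: preferred %s listed=%v\n",
			g.Name, g.PreferredVersion.GroupVersion, found)
	}
}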
•{"msg":"PASSED [sig-api-machinery] Discovery should validate PreferredVersion for each APIGroup [Conformance]","total":309,"completed":126,"skipped":2124,"failed":0} S ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 2 23:12:50.736: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of different groups [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: CRs in different groups (two CRDs) show up in OpenAPI documentation Feb 2 23:12:50.790: INFO: >>> kubeConfig: /root/.kube/config Feb 2 23:12:54.354: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 2 23:13:08.265: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-2410" for this suite. • [SLOW TEST:17.538 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of different groups [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":309,"completed":127,"skipped":2125,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 2 23:13:08.275: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a pod to test downward API volume plugin Feb 2 23:13:08.407: INFO: Waiting up to 5m0s for pod "downwardapi-volume-62fd20ba-95a0-46b9-81f5-f41bb0227517" in namespace "downward-api-7039" to be "Succeeded or Failed" 
Feb 2 23:13:08.434: INFO: Pod "downwardapi-volume-62fd20ba-95a0-46b9-81f5-f41bb0227517": Phase="Pending", Reason="", readiness=false. Elapsed: 26.241864ms Feb 2 23:13:10.438: INFO: Pod "downwardapi-volume-62fd20ba-95a0-46b9-81f5-f41bb0227517": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030403822s Feb 2 23:13:12.442: INFO: Pod "downwardapi-volume-62fd20ba-95a0-46b9-81f5-f41bb0227517": Phase="Running", Reason="", readiness=true. Elapsed: 4.034461633s Feb 2 23:13:14.448: INFO: Pod "downwardapi-volume-62fd20ba-95a0-46b9-81f5-f41bb0227517": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.041143005s STEP: Saw pod success Feb 2 23:13:14.449: INFO: Pod "downwardapi-volume-62fd20ba-95a0-46b9-81f5-f41bb0227517" satisfied condition "Succeeded or Failed" Feb 2 23:13:14.451: INFO: Trying to get logs from node leguer-worker pod downwardapi-volume-62fd20ba-95a0-46b9-81f5-f41bb0227517 container client-container: STEP: delete the pod Feb 2 23:13:14.539: INFO: Waiting for pod downwardapi-volume-62fd20ba-95a0-46b9-81f5-f41bb0227517 to disappear Feb 2 23:13:14.568: INFO: Pod downwardapi-volume-62fd20ba-95a0-46b9-81f5-f41bb0227517 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 2 23:13:14.568: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7039" for this suite. • [SLOW TEST:6.301 seconds] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:36 should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":309,"completed":128,"skipped":2213,"failed":0} [sig-network] Services should serve a basic endpoint from pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 2 23:13:14.576: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:745 [It] should serve a basic endpoint from pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: creating service endpoint-test2 in namespace services-791 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-791 to expose endpoints map[] Feb 2 23:13:14.796: INFO: Failed go get Endpoints object: endpoints "endpoint-test2" not found Feb 2 23:13:15.805: INFO: successfully validated that service endpoint-test2 in namespace services-791 exposes endpoints map[] STEP: Creating pod pod1 in namespace services-791 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-791 to expose endpoints map[pod1:[80]] Feb 2 23:13:19.859: INFO: successfully validated that service 
endpoint-test2 in namespace services-791 exposes endpoints map[pod1:[80]] STEP: Creating pod pod2 in namespace services-791 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-791 to expose endpoints map[pod1:[80] pod2:[80]] Feb 2 23:13:23.086: INFO: successfully validated that service endpoint-test2 in namespace services-791 exposes endpoints map[pod1:[80] pod2:[80]] STEP: Deleting pod pod1 in namespace services-791 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-791 to expose endpoints map[pod2:[80]] Feb 2 23:13:24.789: INFO: successfully validated that service endpoint-test2 in namespace services-791 exposes endpoints map[pod2:[80]] STEP: Deleting pod pod2 in namespace services-791 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-791 to expose endpoints map[] Feb 2 23:13:25.543: INFO: successfully validated that service endpoint-test2 in namespace services-791 exposes endpoints map[] [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 2 23:13:25.874: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-791" for this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 • [SLOW TEST:11.387 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve a basic endpoint from pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods [Conformance]","total":309,"completed":129,"skipped":2213,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 2 23:13:25.963: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:129 [It] should run and stop complex daemon [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 Feb 2 23:13:26.957: INFO: Creating daemon "daemon-set" with a node selector STEP: Initially, daemon pods should not be running on any nodes. Feb 2 23:13:26.998: INFO: Number of nodes with available pods: 0 Feb 2 23:13:26.998: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Change node label to blue, check that daemon pod is launched. 
Feb 2 23:13:27.747: INFO: Number of nodes with available pods: 0 Feb 2 23:13:27.747: INFO: Node leguer-worker is running more than one daemon pod Feb 2 23:13:28.750: INFO: Number of nodes with available pods: 0 Feb 2 23:13:28.750: INFO: Node leguer-worker is running more than one daemon pod Feb 2 23:13:30.324: INFO: Number of nodes with available pods: 0 Feb 2 23:13:30.324: INFO: Node leguer-worker is running more than one daemon pod Feb 2 23:13:30.750: INFO: Number of nodes with available pods: 0 Feb 2 23:13:30.750: INFO: Node leguer-worker is running more than one daemon pod Feb 2 23:13:32.210: INFO: Number of nodes with available pods: 0 Feb 2 23:13:32.210: INFO: Node leguer-worker is running more than one daemon pod Feb 2 23:13:32.994: INFO: Number of nodes with available pods: 1 Feb 2 23:13:32.994: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Update the node label to green, and wait for daemons to be unscheduled Feb 2 23:13:34.479: INFO: Number of nodes with available pods: 1 Feb 2 23:13:34.479: INFO: Number of running nodes: 0, number of available pods: 1 Feb 2 23:13:35.779: INFO: Number of nodes with available pods: 0 Feb 2 23:13:35.779: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate Feb 2 23:13:35.809: INFO: Number of nodes with available pods: 0 Feb 2 23:13:35.809: INFO: Node leguer-worker is running more than one daemon pod Feb 2 23:13:36.814: INFO: Number of nodes with available pods: 0 Feb 2 23:13:36.814: INFO: Node leguer-worker is running more than one daemon pod Feb 2 23:13:37.818: INFO: Number of nodes with available pods: 0 Feb 2 23:13:37.818: INFO: Node leguer-worker is running more than one daemon pod Feb 2 23:13:38.812: INFO: Number of nodes with available pods: 0 Feb 2 23:13:38.812: INFO: Node leguer-worker is running more than one daemon pod Feb 2 23:13:39.812: INFO: Number of nodes with available pods: 0 Feb 2 23:13:39.812: INFO: Node leguer-worker is running more than one daemon pod Feb 2 23:13:40.813: INFO: Number of nodes with available pods: 0 Feb 2 23:13:40.813: INFO: Node leguer-worker is running more than one daemon pod Feb 2 23:13:41.954: INFO: Number of nodes with available pods: 0 Feb 2 23:13:41.954: INFO: Node leguer-worker is running more than one daemon pod Feb 2 23:13:42.813: INFO: Number of nodes with available pods: 0 Feb 2 23:13:42.813: INFO: Node leguer-worker is running more than one daemon pod Feb 2 23:13:43.813: INFO: Number of nodes with available pods: 0 Feb 2 23:13:43.813: INFO: Node leguer-worker is running more than one daemon pod Feb 2 23:13:44.814: INFO: Number of nodes with available pods: 1 Feb 2 23:13:44.814: INFO: Number of running nodes: 1, number of available pods: 1 [AfterEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:95 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-3184, will wait for the garbage collector to delete the pods Feb 2 23:13:44.879: INFO: Deleting DaemonSet.extensions daemon-set took: 6.96755ms Feb 2 23:13:45.479: INFO: Terminating DaemonSet.extensions daemon-set pods took: 600.329799ms Feb 2 23:14:10.183: INFO: Number of nodes with available pods: 0 Feb 2 23:14:10.183: INFO: Number of running nodes: 0, number of available pods: 0 Feb 2 23:14:10.186: INFO: daemonset: 
{"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"4179632"},"items":null} Feb 2 23:14:10.193: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"4179633"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 2 23:14:10.225: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-3184" for this suite. • [SLOW TEST:44.270 seconds] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop complex daemon [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","total":309,"completed":130,"skipped":2251,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 2 23:14:10.234: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:745 [It] should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: creating service in namespace services-5637 Feb 2 23:14:14.337: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=services-5637 exec kube-proxy-mode-detector -- /bin/sh -x -c curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode' Feb 2 23:14:14.594: INFO: stderr: "I0202 23:14:14.475715 1594 log.go:181] (0xc000d0d130) (0xc0008d4960) Create stream\nI0202 23:14:14.475764 1594 log.go:181] (0xc000d0d130) (0xc0008d4960) Stream added, broadcasting: 1\nI0202 23:14:14.480715 1594 log.go:181] (0xc000d0d130) Reply frame received for 1\nI0202 23:14:14.480784 1594 log.go:181] (0xc000d0d130) (0xc000c08000) Create stream\nI0202 23:14:14.480818 1594 log.go:181] (0xc000d0d130) (0xc000c08000) Stream added, broadcasting: 3\nI0202 23:14:14.481835 1594 log.go:181] (0xc000d0d130) Reply frame received for 3\nI0202 23:14:14.481880 1594 log.go:181] (0xc000d0d130) (0xc000c080a0) Create stream\nI0202 23:14:14.481891 1594 log.go:181] (0xc000d0d130) (0xc000c080a0) Stream added, broadcasting: 5\nI0202 23:14:14.482887 1594 log.go:181] (0xc000d0d130) Reply frame received for 5\nI0202 23:14:14.579861 1594 log.go:181] (0xc000d0d130) Data frame received for 5\nI0202 23:14:14.579889 1594 log.go:181] (0xc000c080a0) (5) Data frame handling\nI0202 23:14:14.579907 1594 log.go:181] (0xc000c080a0) (5) Data frame sent\n+ curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode\nI0202 
23:14:14.585445 1594 log.go:181] (0xc000d0d130) Data frame received for 3\nI0202 23:14:14.585472 1594 log.go:181] (0xc000c08000) (3) Data frame handling\nI0202 23:14:14.585493 1594 log.go:181] (0xc000c08000) (3) Data frame sent\nI0202 23:14:14.585795 1594 log.go:181] (0xc000d0d130) Data frame received for 5\nI0202 23:14:14.585815 1594 log.go:181] (0xc000c080a0) (5) Data frame handling\nI0202 23:14:14.586183 1594 log.go:181] (0xc000d0d130) Data frame received for 3\nI0202 23:14:14.586208 1594 log.go:181] (0xc000c08000) (3) Data frame handling\nI0202 23:14:14.588198 1594 log.go:181] (0xc000d0d130) Data frame received for 1\nI0202 23:14:14.588222 1594 log.go:181] (0xc0008d4960) (1) Data frame handling\nI0202 23:14:14.588232 1594 log.go:181] (0xc0008d4960) (1) Data frame sent\nI0202 23:14:14.588249 1594 log.go:181] (0xc000d0d130) (0xc0008d4960) Stream removed, broadcasting: 1\nI0202 23:14:14.588272 1594 log.go:181] (0xc000d0d130) Go away received\nI0202 23:14:14.588584 1594 log.go:181] (0xc000d0d130) (0xc0008d4960) Stream removed, broadcasting: 1\nI0202 23:14:14.588604 1594 log.go:181] (0xc000d0d130) (0xc000c08000) Stream removed, broadcasting: 3\nI0202 23:14:14.588621 1594 log.go:181] (0xc000d0d130) (0xc000c080a0) Stream removed, broadcasting: 5\n" Feb 2 23:14:14.594: INFO: stdout: "iptables" Feb 2 23:14:14.594: INFO: proxyMode: iptables Feb 2 23:14:14.653: INFO: Waiting for pod kube-proxy-mode-detector to disappear Feb 2 23:14:14.657: INFO: Pod kube-proxy-mode-detector no longer exists STEP: creating service affinity-clusterip-timeout in namespace services-5637 STEP: creating replication controller affinity-clusterip-timeout in namespace services-5637 I0202 23:14:14.723000 7 runners.go:190] Created replication controller with name: affinity-clusterip-timeout, namespace: services-5637, replica count: 3 I0202 23:14:17.773393 7 runners.go:190] affinity-clusterip-timeout Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0202 23:14:20.773621 7 runners.go:190] affinity-clusterip-timeout Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Feb 2 23:14:20.779: INFO: Creating new exec pod Feb 2 23:14:25.811: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=services-5637 exec execpod-affinityh6vs4 -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-timeout 80' Feb 2 23:14:26.038: INFO: stderr: "I0202 23:14:25.945478 1612 log.go:181] (0xc00003b1e0) (0xc000c1e3c0) Create stream\nI0202 23:14:25.945538 1612 log.go:181] (0xc00003b1e0) (0xc000c1e3c0) Stream added, broadcasting: 1\nI0202 23:14:25.947332 1612 log.go:181] (0xc00003b1e0) Reply frame received for 1\nI0202 23:14:25.947380 1612 log.go:181] (0xc00003b1e0) (0xc0005f20a0) Create stream\nI0202 23:14:25.947392 1612 log.go:181] (0xc00003b1e0) (0xc0005f20a0) Stream added, broadcasting: 3\nI0202 23:14:25.948189 1612 log.go:181] (0xc00003b1e0) Reply frame received for 3\nI0202 23:14:25.948231 1612 log.go:181] (0xc00003b1e0) (0xc0003b8dc0) Create stream\nI0202 23:14:25.948244 1612 log.go:181] (0xc00003b1e0) (0xc0003b8dc0) Stream added, broadcasting: 5\nI0202 23:14:25.949699 1612 log.go:181] (0xc00003b1e0) Reply frame received for 5\nI0202 23:14:26.026220 1612 log.go:181] (0xc00003b1e0) Data frame received for 5\nI0202 23:14:26.026252 1612 log.go:181] (0xc0003b8dc0) (5) Data frame handling\nI0202 23:14:26.026280 1612 log.go:181] (0xc0003b8dc0) (5) Data frame 
sent\n+ nc -zv -t -w 2 affinity-clusterip-timeout 80\nI0202 23:14:26.031815 1612 log.go:181] (0xc00003b1e0) Data frame received for 5\nI0202 23:14:26.031863 1612 log.go:181] (0xc0003b8dc0) (5) Data frame handling\nI0202 23:14:26.031881 1612 log.go:181] (0xc0003b8dc0) (5) Data frame sent\nConnection to affinity-clusterip-timeout 80 port [tcp/http] succeeded!\nI0202 23:14:26.032068 1612 log.go:181] (0xc00003b1e0) Data frame received for 5\nI0202 23:14:26.032087 1612 log.go:181] (0xc0003b8dc0) (5) Data frame handling\nI0202 23:14:26.032098 1612 log.go:181] (0xc00003b1e0) Data frame received for 3\nI0202 23:14:26.032109 1612 log.go:181] (0xc0005f20a0) (3) Data frame handling\nI0202 23:14:26.033906 1612 log.go:181] (0xc00003b1e0) Data frame received for 1\nI0202 23:14:26.033941 1612 log.go:181] (0xc000c1e3c0) (1) Data frame handling\nI0202 23:14:26.033990 1612 log.go:181] (0xc000c1e3c0) (1) Data frame sent\nI0202 23:14:26.034008 1612 log.go:181] (0xc00003b1e0) (0xc000c1e3c0) Stream removed, broadcasting: 1\nI0202 23:14:26.034024 1612 log.go:181] (0xc00003b1e0) Go away received\nI0202 23:14:26.034412 1612 log.go:181] (0xc00003b1e0) (0xc000c1e3c0) Stream removed, broadcasting: 1\nI0202 23:14:26.034425 1612 log.go:181] (0xc00003b1e0) (0xc0005f20a0) Stream removed, broadcasting: 3\nI0202 23:14:26.034430 1612 log.go:181] (0xc00003b1e0) (0xc0003b8dc0) Stream removed, broadcasting: 5\n" Feb 2 23:14:26.038: INFO: stdout: "" Feb 2 23:14:26.039: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=services-5637 exec execpod-affinityh6vs4 -- /bin/sh -x -c nc -zv -t -w 2 10.96.169.221 80' Feb 2 23:14:26.259: INFO: stderr: "I0202 23:14:26.179741 1631 log.go:181] (0xc000a24210) (0xc000a1c500) Create stream\nI0202 23:14:26.179800 1631 log.go:181] (0xc000a24210) (0xc000a1c500) Stream added, broadcasting: 1\nI0202 23:14:26.182429 1631 log.go:181] (0xc000a24210) Reply frame received for 1\nI0202 23:14:26.182483 1631 log.go:181] (0xc000a24210) (0xc000552000) Create stream\nI0202 23:14:26.182517 1631 log.go:181] (0xc000a24210) (0xc000552000) Stream added, broadcasting: 3\nI0202 23:14:26.183417 1631 log.go:181] (0xc000a24210) Reply frame received for 3\nI0202 23:14:26.183438 1631 log.go:181] (0xc000a24210) (0xc00089a320) Create stream\nI0202 23:14:26.183444 1631 log.go:181] (0xc000a24210) (0xc00089a320) Stream added, broadcasting: 5\nI0202 23:14:26.184413 1631 log.go:181] (0xc000a24210) Reply frame received for 5\nI0202 23:14:26.252236 1631 log.go:181] (0xc000a24210) Data frame received for 3\nI0202 23:14:26.252272 1631 log.go:181] (0xc000552000) (3) Data frame handling\nI0202 23:14:26.252300 1631 log.go:181] (0xc000a24210) Data frame received for 5\nI0202 23:14:26.252322 1631 log.go:181] (0xc00089a320) (5) Data frame handling\nI0202 23:14:26.252342 1631 log.go:181] (0xc00089a320) (5) Data frame sent\nI0202 23:14:26.252353 1631 log.go:181] (0xc000a24210) Data frame received for 5\nI0202 23:14:26.252363 1631 log.go:181] (0xc00089a320) (5) Data frame handling\n+ nc -zv -t -w 2 10.96.169.221 80\nConnection to 10.96.169.221 80 port [tcp/http] succeeded!\nI0202 23:14:26.253312 1631 log.go:181] (0xc000a24210) Data frame received for 1\nI0202 23:14:26.253341 1631 log.go:181] (0xc000a1c500) (1) Data frame handling\nI0202 23:14:26.253356 1631 log.go:181] (0xc000a1c500) (1) Data frame sent\nI0202 23:14:26.253368 1631 log.go:181] (0xc000a24210) (0xc000a1c500) Stream removed, broadcasting: 1\nI0202 23:14:26.253380 1631 log.go:181] (0xc000a24210) Go away 
received\nI0202 23:14:26.253664 1631 log.go:181] (0xc000a24210) (0xc000a1c500) Stream removed, broadcasting: 1\nI0202 23:14:26.253677 1631 log.go:181] (0xc000a24210) (0xc000552000) Stream removed, broadcasting: 3\nI0202 23:14:26.253684 1631 log.go:181] (0xc000a24210) (0xc00089a320) Stream removed, broadcasting: 5\n" Feb 2 23:14:26.259: INFO: stdout: "" Feb 2 23:14:26.259: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=services-5637 exec execpod-affinityh6vs4 -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.96.169.221:80/ ; done' Feb 2 23:14:26.569: INFO: stderr: "I0202 23:14:26.397164 1647 log.go:181] (0xc00003a420) (0xc0008881e0) Create stream\nI0202 23:14:26.397268 1647 log.go:181] (0xc00003a420) (0xc0008881e0) Stream added, broadcasting: 1\nI0202 23:14:26.402740 1647 log.go:181] (0xc00003a420) Reply frame received for 1\nI0202 23:14:26.402798 1647 log.go:181] (0xc00003a420) (0xc0008941e0) Create stream\nI0202 23:14:26.402819 1647 log.go:181] (0xc00003a420) (0xc0008941e0) Stream added, broadcasting: 3\nI0202 23:14:26.403912 1647 log.go:181] (0xc00003a420) Reply frame received for 3\nI0202 23:14:26.403953 1647 log.go:181] (0xc00003a420) (0xc000889220) Create stream\nI0202 23:14:26.403963 1647 log.go:181] (0xc00003a420) (0xc000889220) Stream added, broadcasting: 5\nI0202 23:14:26.404812 1647 log.go:181] (0xc00003a420) Reply frame received for 5\nI0202 23:14:26.457864 1647 log.go:181] (0xc00003a420) Data frame received for 5\nI0202 23:14:26.457902 1647 log.go:181] (0xc000889220) (5) Data frame handling\nI0202 23:14:26.457917 1647 log.go:181] (0xc000889220) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.169.221:80/\nI0202 23:14:26.457938 1647 log.go:181] (0xc00003a420) Data frame received for 3\nI0202 23:14:26.457948 1647 log.go:181] (0xc0008941e0) (3) Data frame handling\nI0202 23:14:26.457960 1647 log.go:181] (0xc0008941e0) (3) Data frame sent\nI0202 23:14:26.464558 1647 log.go:181] (0xc00003a420) Data frame received for 3\nI0202 23:14:26.464601 1647 log.go:181] (0xc0008941e0) (3) Data frame handling\nI0202 23:14:26.464634 1647 log.go:181] (0xc0008941e0) (3) Data frame sent\nI0202 23:14:26.465665 1647 log.go:181] (0xc00003a420) Data frame received for 5\nI0202 23:14:26.465695 1647 log.go:181] (0xc000889220) (5) Data frame handling\nI0202 23:14:26.465709 1647 log.go:181] (0xc000889220) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.169.221:80/\nI0202 23:14:26.465734 1647 log.go:181] (0xc00003a420) Data frame received for 3\nI0202 23:14:26.465752 1647 log.go:181] (0xc0008941e0) (3) Data frame handling\nI0202 23:14:26.465765 1647 log.go:181] (0xc0008941e0) (3) Data frame sent\nI0202 23:14:26.471056 1647 log.go:181] (0xc00003a420) Data frame received for 3\nI0202 23:14:26.471084 1647 log.go:181] (0xc0008941e0) (3) Data frame handling\nI0202 23:14:26.471097 1647 log.go:181] (0xc0008941e0) (3) Data frame sent\nI0202 23:14:26.472096 1647 log.go:181] (0xc00003a420) Data frame received for 3\nI0202 23:14:26.472128 1647 log.go:181] (0xc0008941e0) (3) Data frame handling\nI0202 23:14:26.472154 1647 log.go:181] (0xc0008941e0) (3) Data frame sent\nI0202 23:14:26.472192 1647 log.go:181] (0xc00003a420) Data frame received for 5\nI0202 23:14:26.472205 1647 log.go:181] (0xc000889220) (5) Data frame handling\nI0202 23:14:26.472223 1647 log.go:181] (0xc000889220) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 
http://10.96.169.221:80/\nI0202 23:14:26.475796 1647 log.go:181] (0xc00003a420) Data frame received for 3\nI0202 23:14:26.475839 1647 log.go:181] (0xc0008941e0) (3) Data frame handling\nI0202 23:14:26.475880 1647 log.go:181] (0xc0008941e0) (3) Data frame sent\nI0202 23:14:26.476737 1647 log.go:181] (0xc00003a420) Data frame received for 3\nI0202 23:14:26.476771 1647 log.go:181] (0xc00003a420) Data frame received for 5\nI0202 23:14:26.476800 1647 log.go:181] (0xc000889220) (5) Data frame handling\nI0202 23:14:26.476813 1647 log.go:181] (0xc000889220) (5) Data frame sent\nI0202 23:14:26.476824 1647 log.go:181] (0xc00003a420) Data frame received for 5\nI0202 23:14:26.476965 1647 log.go:181] (0xc000889220) (5) Data frame handling\nI0202 23:14:26.477000 1647 log.go:181] (0xc0008941e0) (3) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.169.221:80/\nI0202 23:14:26.477028 1647 log.go:181] (0xc0008941e0) (3) Data frame sent\nI0202 23:14:26.477072 1647 log.go:181] (0xc000889220) (5) Data frame sent\nI0202 23:14:26.483423 1647 log.go:181] (0xc00003a420) Data frame received for 3\nI0202 23:14:26.483446 1647 log.go:181] (0xc0008941e0) (3) Data frame handling\nI0202 23:14:26.483465 1647 log.go:181] (0xc0008941e0) (3) Data frame sent\nI0202 23:14:26.484280 1647 log.go:181] (0xc00003a420) Data frame received for 5\nI0202 23:14:26.484303 1647 log.go:181] (0xc000889220) (5) Data frame handling\nI0202 23:14:26.484316 1647 log.go:181] (0xc000889220) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeoutI0202 23:14:26.484343 1647 log.go:181] (0xc00003a420) Data frame received for 3\nI0202 23:14:26.484370 1647 log.go:181] (0xc0008941e0) (3) Data frame handling\nI0202 23:14:26.484380 1647 log.go:181] (0xc0008941e0) (3) Data frame sent\nI0202 23:14:26.484400 1647 log.go:181] (0xc00003a420) Data frame received for 5\nI0202 23:14:26.484407 1647 log.go:181] (0xc000889220) (5) Data frame handling\nI0202 23:14:26.484415 1647 log.go:181] (0xc000889220) (5) Data frame sent\n 2 http://10.96.169.221:80/\nI0202 23:14:26.490159 1647 log.go:181] (0xc00003a420) Data frame received for 3\nI0202 23:14:26.490179 1647 log.go:181] (0xc0008941e0) (3) Data frame handling\nI0202 23:14:26.490195 1647 log.go:181] (0xc0008941e0) (3) Data frame sent\nI0202 23:14:26.491181 1647 log.go:181] (0xc00003a420) Data frame received for 3\nI0202 23:14:26.491218 1647 log.go:181] (0xc0008941e0) (3) Data frame handling\nI0202 23:14:26.491233 1647 log.go:181] (0xc0008941e0) (3) Data frame sent\nI0202 23:14:26.491254 1647 log.go:181] (0xc00003a420) Data frame received for 5\nI0202 23:14:26.491273 1647 log.go:181] (0xc000889220) (5) Data frame handling\nI0202 23:14:26.491285 1647 log.go:181] (0xc000889220) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.169.221:80/\nI0202 23:14:26.496584 1647 log.go:181] (0xc00003a420) Data frame received for 3\nI0202 23:14:26.496612 1647 log.go:181] (0xc0008941e0) (3) Data frame handling\nI0202 23:14:26.496633 1647 log.go:181] (0xc0008941e0) (3) Data frame sent\nI0202 23:14:26.497771 1647 log.go:181] (0xc00003a420) Data frame received for 5\nI0202 23:14:26.497801 1647 log.go:181] (0xc000889220) (5) Data frame handling\nI0202 23:14:26.497814 1647 log.go:181] (0xc000889220) (5) Data frame sent\nI0202 23:14:26.497823 1647 log.go:181] (0xc00003a420) Data frame received for 5\nI0202 23:14:26.497832 1647 log.go:181] (0xc000889220) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.169.221:80/\nI0202 23:14:26.497854 1647 log.go:181] 
(0xc000889220) (5) Data frame sent\nI0202 23:14:26.497868 1647 log.go:181] (0xc00003a420) Data frame received for 3\nI0202 23:14:26.497888 1647 log.go:181] (0xc0008941e0) (3) Data frame handling\nI0202 23:14:26.497904 1647 log.go:181] (0xc0008941e0) (3) Data frame sent\nI0202 23:14:26.505433 1647 log.go:181] (0xc00003a420) Data frame received for 3\nI0202 23:14:26.505472 1647 log.go:181] (0xc0008941e0) (3) Data frame handling\nI0202 23:14:26.505492 1647 log.go:181] (0xc0008941e0) (3) Data frame sent\nI0202 23:14:26.505667 1647 log.go:181] (0xc00003a420) Data frame received for 5\nI0202 23:14:26.505681 1647 log.go:181] (0xc000889220) (5) Data frame handling\nI0202 23:14:26.505687 1647 log.go:181] (0xc000889220) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.169.221:80/\nI0202 23:14:26.505695 1647 log.go:181] (0xc00003a420) Data frame received for 3\nI0202 23:14:26.505716 1647 log.go:181] (0xc0008941e0) (3) Data frame handling\nI0202 23:14:26.505731 1647 log.go:181] (0xc0008941e0) (3) Data frame sent\nI0202 23:14:26.510847 1647 log.go:181] (0xc00003a420) Data frame received for 3\nI0202 23:14:26.510870 1647 log.go:181] (0xc0008941e0) (3) Data frame handling\nI0202 23:14:26.510896 1647 log.go:181] (0xc0008941e0) (3) Data frame sent\nI0202 23:14:26.511178 1647 log.go:181] (0xc00003a420) Data frame received for 5\nI0202 23:14:26.511205 1647 log.go:181] (0xc000889220) (5) Data frame handling\nI0202 23:14:26.511211 1647 log.go:181] (0xc000889220) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.169.221:80/\nI0202 23:14:26.511247 1647 log.go:181] (0xc00003a420) Data frame received for 3\nI0202 23:14:26.511281 1647 log.go:181] (0xc0008941e0) (3) Data frame handling\nI0202 23:14:26.511316 1647 log.go:181] (0xc0008941e0) (3) Data frame sent\nI0202 23:14:26.517362 1647 log.go:181] (0xc00003a420) Data frame received for 3\nI0202 23:14:26.517376 1647 log.go:181] (0xc0008941e0) (3) Data frame handling\nI0202 23:14:26.517384 1647 log.go:181] (0xc0008941e0) (3) Data frame sent\nI0202 23:14:26.517951 1647 log.go:181] (0xc00003a420) Data frame received for 3\nI0202 23:14:26.517989 1647 log.go:181] (0xc0008941e0) (3) Data frame handling\nI0202 23:14:26.518001 1647 log.go:181] (0xc0008941e0) (3) Data frame sent\nI0202 23:14:26.518015 1647 log.go:181] (0xc00003a420) Data frame received for 5\nI0202 23:14:26.518025 1647 log.go:181] (0xc000889220) (5) Data frame handling\nI0202 23:14:26.518044 1647 log.go:181] (0xc000889220) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.169.221:80/\nI0202 23:14:26.522728 1647 log.go:181] (0xc00003a420) Data frame received for 3\nI0202 23:14:26.522781 1647 log.go:181] (0xc0008941e0) (3) Data frame handling\nI0202 23:14:26.522828 1647 log.go:181] (0xc0008941e0) (3) Data frame sent\nI0202 23:14:26.523250 1647 log.go:181] (0xc00003a420) Data frame received for 5\nI0202 23:14:26.523274 1647 log.go:181] (0xc000889220) (5) Data frame handling\nI0202 23:14:26.523292 1647 log.go:181] (0xc000889220) (5) Data frame sent\nI0202 23:14:26.523305 1647 log.go:181] (0xc00003a420) Data frame received for 5\nI0202 23:14:26.523318 1647 log.go:181] (0xc000889220) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.169.221:80/\nI0202 23:14:26.523352 1647 log.go:181] (0xc000889220) (5) Data frame sent\nI0202 23:14:26.523371 1647 log.go:181] (0xc00003a420) Data frame received for 3\nI0202 23:14:26.523391 1647 log.go:181] (0xc0008941e0) (3) Data frame handling\nI0202 23:14:26.523411 1647 
log.go:181] (0xc0008941e0) (3) Data frame sent\nI0202 23:14:26.530589 1647 log.go:181] (0xc00003a420) Data frame received for 3\nI0202 23:14:26.530614 1647 log.go:181] (0xc0008941e0) (3) Data frame handling\nI0202 23:14:26.530639 1647 log.go:181] (0xc0008941e0) (3) Data frame sent\nI0202 23:14:26.531280 1647 log.go:181] (0xc00003a420) Data frame received for 5\nI0202 23:14:26.531305 1647 log.go:181] (0xc000889220) (5) Data frame handling\nI0202 23:14:26.531312 1647 log.go:181] (0xc000889220) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.169.221:80/\nI0202 23:14:26.531334 1647 log.go:181] (0xc00003a420) Data frame received for 3\nI0202 23:14:26.531378 1647 log.go:181] (0xc0008941e0) (3) Data frame handling\nI0202 23:14:26.531399 1647 log.go:181] (0xc0008941e0) (3) Data frame sent\nI0202 23:14:26.536942 1647 log.go:181] (0xc00003a420) Data frame received for 3\nI0202 23:14:26.536961 1647 log.go:181] (0xc0008941e0) (3) Data frame handling\nI0202 23:14:26.536977 1647 log.go:181] (0xc0008941e0) (3) Data frame sent\nI0202 23:14:26.537593 1647 log.go:181] (0xc00003a420) Data frame received for 3\nI0202 23:14:26.537639 1647 log.go:181] (0xc0008941e0) (3) Data frame handling\nI0202 23:14:26.537670 1647 log.go:181] (0xc0008941e0) (3) Data frame sent\nI0202 23:14:26.537698 1647 log.go:181] (0xc00003a420) Data frame received for 5\nI0202 23:14:26.537713 1647 log.go:181] (0xc000889220) (5) Data frame handling\nI0202 23:14:26.537723 1647 log.go:181] (0xc000889220) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.169.221:80/\nI0202 23:14:26.541414 1647 log.go:181] (0xc00003a420) Data frame received for 3\nI0202 23:14:26.541439 1647 log.go:181] (0xc0008941e0) (3) Data frame handling\nI0202 23:14:26.541463 1647 log.go:181] (0xc0008941e0) (3) Data frame sent\nI0202 23:14:26.541563 1647 log.go:181] (0xc00003a420) Data frame received for 3\nI0202 23:14:26.541575 1647 log.go:181] (0xc0008941e0) (3) Data frame handling\nI0202 23:14:26.541589 1647 log.go:181] (0xc00003a420) Data frame received for 5\nI0202 23:14:26.541619 1647 log.go:181] (0xc000889220) (5) Data frame handling\nI0202 23:14:26.541639 1647 log.go:181] (0xc000889220) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.169.221:80/\nI0202 23:14:26.541663 1647 log.go:181] (0xc0008941e0) (3) Data frame sent\nI0202 23:14:26.546317 1647 log.go:181] (0xc00003a420) Data frame received for 3\nI0202 23:14:26.546329 1647 log.go:181] (0xc0008941e0) (3) Data frame handling\nI0202 23:14:26.546335 1647 log.go:181] (0xc0008941e0) (3) Data frame sent\nI0202 23:14:26.547051 1647 log.go:181] (0xc00003a420) Data frame received for 3\nI0202 23:14:26.547076 1647 log.go:181] (0xc0008941e0) (3) Data frame handling\nI0202 23:14:26.547084 1647 log.go:181] (0xc0008941e0) (3) Data frame sent\nI0202 23:14:26.547096 1647 log.go:181] (0xc00003a420) Data frame received for 5\nI0202 23:14:26.547102 1647 log.go:181] (0xc000889220) (5) Data frame handling\nI0202 23:14:26.547108 1647 log.go:181] (0xc000889220) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.169.221:80/\nI0202 23:14:26.552579 1647 log.go:181] (0xc00003a420) Data frame received for 3\nI0202 23:14:26.552595 1647 log.go:181] (0xc0008941e0) (3) Data frame handling\nI0202 23:14:26.552609 1647 log.go:181] (0xc0008941e0) (3) Data frame sent\nI0202 23:14:26.553128 1647 log.go:181] (0xc00003a420) Data frame received for 5\nI0202 23:14:26.553144 1647 log.go:181] (0xc000889220) (5) Data frame handling\nI0202 23:14:26.553155 
1647 log.go:181] (0xc000889220) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.169.221:80/\nI0202 23:14:26.553303 1647 log.go:181] (0xc00003a420) Data frame received for 3\nI0202 23:14:26.553318 1647 log.go:181] (0xc0008941e0) (3) Data frame handling\nI0202 23:14:26.553333 1647 log.go:181] (0xc0008941e0) (3) Data frame sent\nI0202 23:14:26.558488 1647 log.go:181] (0xc00003a420) Data frame received for 3\nI0202 23:14:26.558506 1647 log.go:181] (0xc0008941e0) (3) Data frame handling\nI0202 23:14:26.558515 1647 log.go:181] (0xc0008941e0) (3) Data frame sent\nI0202 23:14:26.559662 1647 log.go:181] (0xc00003a420) Data frame received for 3\nI0202 23:14:26.559681 1647 log.go:181] (0xc0008941e0) (3) Data frame handling\nI0202 23:14:26.559709 1647 log.go:181] (0xc00003a420) Data frame received for 5\nI0202 23:14:26.559726 1647 log.go:181] (0xc000889220) (5) Data frame handling\nI0202 23:14:26.561940 1647 log.go:181] (0xc00003a420) Data frame received for 1\nI0202 23:14:26.561966 1647 log.go:181] (0xc0008881e0) (1) Data frame handling\nI0202 23:14:26.561994 1647 log.go:181] (0xc0008881e0) (1) Data frame sent\nI0202 23:14:26.562017 1647 log.go:181] (0xc00003a420) (0xc0008881e0) Stream removed, broadcasting: 1\nI0202 23:14:26.562031 1647 log.go:181] (0xc00003a420) Go away received\nI0202 23:14:26.562444 1647 log.go:181] (0xc00003a420) (0xc0008881e0) Stream removed, broadcasting: 1\nI0202 23:14:26.562463 1647 log.go:181] (0xc00003a420) (0xc0008941e0) Stream removed, broadcasting: 3\nI0202 23:14:26.562472 1647 log.go:181] (0xc00003a420) (0xc000889220) Stream removed, broadcasting: 5\n" Feb 2 23:14:26.569: INFO: stdout: "\naffinity-clusterip-timeout-4zdht\naffinity-clusterip-timeout-4zdht\naffinity-clusterip-timeout-4zdht\naffinity-clusterip-timeout-4zdht\naffinity-clusterip-timeout-4zdht\naffinity-clusterip-timeout-4zdht\naffinity-clusterip-timeout-4zdht\naffinity-clusterip-timeout-4zdht\naffinity-clusterip-timeout-4zdht\naffinity-clusterip-timeout-4zdht\naffinity-clusterip-timeout-4zdht\naffinity-clusterip-timeout-4zdht\naffinity-clusterip-timeout-4zdht\naffinity-clusterip-timeout-4zdht\naffinity-clusterip-timeout-4zdht\naffinity-clusterip-timeout-4zdht" Feb 2 23:14:26.570: INFO: Received response from host: affinity-clusterip-timeout-4zdht Feb 2 23:14:26.570: INFO: Received response from host: affinity-clusterip-timeout-4zdht Feb 2 23:14:26.570: INFO: Received response from host: affinity-clusterip-timeout-4zdht Feb 2 23:14:26.570: INFO: Received response from host: affinity-clusterip-timeout-4zdht Feb 2 23:14:26.570: INFO: Received response from host: affinity-clusterip-timeout-4zdht Feb 2 23:14:26.570: INFO: Received response from host: affinity-clusterip-timeout-4zdht Feb 2 23:14:26.570: INFO: Received response from host: affinity-clusterip-timeout-4zdht Feb 2 23:14:26.570: INFO: Received response from host: affinity-clusterip-timeout-4zdht Feb 2 23:14:26.570: INFO: Received response from host: affinity-clusterip-timeout-4zdht Feb 2 23:14:26.570: INFO: Received response from host: affinity-clusterip-timeout-4zdht Feb 2 23:14:26.570: INFO: Received response from host: affinity-clusterip-timeout-4zdht Feb 2 23:14:26.570: INFO: Received response from host: affinity-clusterip-timeout-4zdht Feb 2 23:14:26.570: INFO: Received response from host: affinity-clusterip-timeout-4zdht Feb 2 23:14:26.570: INFO: Received response from host: affinity-clusterip-timeout-4zdht Feb 2 23:14:26.570: INFO: Received response from host: affinity-clusterip-timeout-4zdht Feb 2 23:14:26.570: INFO: 
Received response from host: affinity-clusterip-timeout-4zdht Feb 2 23:14:26.570: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=services-5637 exec execpod-affinityh6vs4 -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://10.96.169.221:80/' Feb 2 23:14:26.779: INFO: stderr: "I0202 23:14:26.708616 1664 log.go:181] (0xc000b19760) (0xc000934b40) Create stream\nI0202 23:14:26.708688 1664 log.go:181] (0xc000b19760) (0xc000934b40) Stream added, broadcasting: 1\nI0202 23:14:26.711169 1664 log.go:181] (0xc000b19760) Reply frame received for 1\nI0202 23:14:26.711208 1664 log.go:181] (0xc000b19760) (0xc00051e280) Create stream\nI0202 23:14:26.711220 1664 log.go:181] (0xc000b19760) (0xc00051e280) Stream added, broadcasting: 3\nI0202 23:14:26.712135 1664 log.go:181] (0xc000b19760) Reply frame received for 3\nI0202 23:14:26.712171 1664 log.go:181] (0xc000b19760) (0xc00051e320) Create stream\nI0202 23:14:26.712181 1664 log.go:181] (0xc000b19760) (0xc00051e320) Stream added, broadcasting: 5\nI0202 23:14:26.713461 1664 log.go:181] (0xc000b19760) Reply frame received for 5\nI0202 23:14:26.766853 1664 log.go:181] (0xc000b19760) Data frame received for 5\nI0202 23:14:26.766876 1664 log.go:181] (0xc00051e320) (5) Data frame handling\nI0202 23:14:26.766887 1664 log.go:181] (0xc00051e320) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://10.96.169.221:80/\nI0202 23:14:26.771356 1664 log.go:181] (0xc000b19760) Data frame received for 3\nI0202 23:14:26.771369 1664 log.go:181] (0xc00051e280) (3) Data frame handling\nI0202 23:14:26.771381 1664 log.go:181] (0xc00051e280) (3) Data frame sent\nI0202 23:14:26.771686 1664 log.go:181] (0xc000b19760) Data frame received for 5\nI0202 23:14:26.771711 1664 log.go:181] (0xc00051e320) (5) Data frame handling\nI0202 23:14:26.771885 1664 log.go:181] (0xc000b19760) Data frame received for 3\nI0202 23:14:26.771903 1664 log.go:181] (0xc00051e280) (3) Data frame handling\nI0202 23:14:26.773414 1664 log.go:181] (0xc000b19760) Data frame received for 1\nI0202 23:14:26.773428 1664 log.go:181] (0xc000934b40) (1) Data frame handling\nI0202 23:14:26.773442 1664 log.go:181] (0xc000934b40) (1) Data frame sent\nI0202 23:14:26.773492 1664 log.go:181] (0xc000b19760) (0xc000934b40) Stream removed, broadcasting: 1\nI0202 23:14:26.773533 1664 log.go:181] (0xc000b19760) Go away received\nI0202 23:14:26.773767 1664 log.go:181] (0xc000b19760) (0xc000934b40) Stream removed, broadcasting: 1\nI0202 23:14:26.773779 1664 log.go:181] (0xc000b19760) (0xc00051e280) Stream removed, broadcasting: 3\nI0202 23:14:26.773789 1664 log.go:181] (0xc000b19760) (0xc00051e320) Stream removed, broadcasting: 5\n" Feb 2 23:14:26.779: INFO: stdout: "affinity-clusterip-timeout-4zdht" Feb 2 23:14:46.780: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=services-5637 exec execpod-affinityh6vs4 -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://10.96.169.221:80/' Feb 2 23:14:47.043: INFO: stderr: "I0202 23:14:46.924371 1682 log.go:181] (0xc000e25290) (0xc000474500) Create stream\nI0202 23:14:46.924480 1682 log.go:181] (0xc000e25290) (0xc000474500) Stream added, broadcasting: 1\nI0202 23:14:46.932404 1682 log.go:181] (0xc000e25290) Reply frame received for 1\nI0202 23:14:46.932471 1682 log.go:181] (0xc000e25290) (0xc000475040) Create stream\nI0202 23:14:46.932484 1682 log.go:181] (0xc000e25290) (0xc000475040) Stream added, broadcasting: 3\nI0202 23:14:46.933714 1682 
log.go:181] (0xc000e25290) Reply frame received for 3\nI0202 23:14:46.933762 1682 log.go:181] (0xc000e25290) (0xc0004757c0) Create stream\nI0202 23:14:46.933795 1682 log.go:181] (0xc000e25290) (0xc0004757c0) Stream added, broadcasting: 5\nI0202 23:14:46.944346 1682 log.go:181] (0xc000e25290) Reply frame received for 5\nI0202 23:14:47.030385 1682 log.go:181] (0xc000e25290) Data frame received for 5\nI0202 23:14:47.030429 1682 log.go:181] (0xc0004757c0) (5) Data frame handling\nI0202 23:14:47.030459 1682 log.go:181] (0xc0004757c0) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://10.96.169.221:80/\nI0202 23:14:47.035003 1682 log.go:181] (0xc000e25290) Data frame received for 3\nI0202 23:14:47.035044 1682 log.go:181] (0xc000475040) (3) Data frame handling\nI0202 23:14:47.035069 1682 log.go:181] (0xc000475040) (3) Data frame sent\nI0202 23:14:47.035722 1682 log.go:181] (0xc000e25290) Data frame received for 3\nI0202 23:14:47.035756 1682 log.go:181] (0xc000475040) (3) Data frame handling\nI0202 23:14:47.035867 1682 log.go:181] (0xc000e25290) Data frame received for 5\nI0202 23:14:47.035885 1682 log.go:181] (0xc0004757c0) (5) Data frame handling\nI0202 23:14:47.037983 1682 log.go:181] (0xc000e25290) Data frame received for 1\nI0202 23:14:47.038024 1682 log.go:181] (0xc000474500) (1) Data frame handling\nI0202 23:14:47.038057 1682 log.go:181] (0xc000474500) (1) Data frame sent\nI0202 23:14:47.038081 1682 log.go:181] (0xc000e25290) (0xc000474500) Stream removed, broadcasting: 1\nI0202 23:14:47.038104 1682 log.go:181] (0xc000e25290) Go away received\nI0202 23:14:47.038459 1682 log.go:181] (0xc000e25290) (0xc000474500) Stream removed, broadcasting: 1\nI0202 23:14:47.038477 1682 log.go:181] (0xc000e25290) (0xc000475040) Stream removed, broadcasting: 3\nI0202 23:14:47.038484 1682 log.go:181] (0xc000e25290) (0xc0004757c0) Stream removed, broadcasting: 5\n" Feb 2 23:14:47.044: INFO: stdout: "affinity-clusterip-timeout-fj6kt" Feb 2 23:14:47.044: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-clusterip-timeout in namespace services-5637, will wait for the garbage collector to delete the pods Feb 2 23:14:47.157: INFO: Deleting ReplicationController affinity-clusterip-timeout took: 16.750551ms Feb 2 23:14:47.757: INFO: Terminating ReplicationController affinity-clusterip-timeout pods took: 600.199822ms [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 2 23:15:10.236: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-5637" for this suite. 
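The exec'd curl loop above keeps hitting the same backend (affinity-clusterip-timeout-4zdht) until a ~20-second pause, after which the response moves to a different pod (affinity-clusterip-timeout-fj6kt). A minimal sketch of the Service shape that produces this behaviour, with hypothetical names and a short timeout (the test's own manifest and values are not shown in the log):

# Illustrative only: ClientIP session affinity with an idle timeout.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: affinity-demo              # hypothetical name
spec:
  selector:
    app: affinity-demo             # assumes backend pods carry this label
  ports:
  - port: 80
    targetPort: 8080
  sessionAffinity: ClientIP
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 10           # affinity expires after ~10s of client inactivity
EOF
# Repeated requests from one client stick to a single backend; after the
# timeout of inactivity the next request may land on a different pod.
kubectl run curl-client --rm -it --restart=Never --image=curlimages/curl -- \
  sh -c 'for i in $(seq 5); do curl -s --connect-timeout 2 http://affinity-demo:80/; echo; done'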
[AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 • [SLOW TEST:60.029 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","total":309,"completed":131,"skipped":2303,"failed":0} [k8s.io] Pods should be updated [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 2 23:15:10.263: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:187 [It] should be updated [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Feb 2 23:15:14.937: INFO: Successfully updated pod "pod-update-f1531c24-61cc-46a9-9606-989c56a1f2a2" STEP: verifying the updated pod is in kubernetes Feb 2 23:15:14.955: INFO: Pod update OK [AfterEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 2 23:15:14.955: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-3620" for this suite. 
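The "should be updated" spec above creates a pod, mutates it in place, and verifies the change is visible through the API. A minimal sketch of the same kind of in-place update with plain kubectl, assuming a hypothetical pod name and label:

# Labels are among the pod fields that may be changed on a running pod.
kubectl label pod pod-update-demo updated=true --overwrite
# Equivalent strategic-merge patch form:
kubectl patch pod pod-update-demo -p '{"metadata":{"labels":{"updated":"true"}}}'
# Confirm the change was applied:
kubectl get pod pod-update-demo --show-labels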
•{"msg":"PASSED [k8s.io] Pods should be updated [NodeConformance] [Conformance]","total":309,"completed":132,"skipped":2303,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 2 23:15:14.964: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching orphans and release non-matching pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: Orphaning one of the Job's Pods Feb 2 23:15:21.587: INFO: Successfully updated pod "adopt-release-2ln4r" STEP: Checking that the Job readopts the Pod Feb 2 23:15:21.587: INFO: Waiting up to 15m0s for pod "adopt-release-2ln4r" in namespace "job-6983" to be "adopted" Feb 2 23:15:21.609: INFO: Pod "adopt-release-2ln4r": Phase="Running", Reason="", readiness=true. Elapsed: 22.367907ms Feb 2 23:15:23.613: INFO: Pod "adopt-release-2ln4r": Phase="Running", Reason="", readiness=true. Elapsed: 2.025980802s Feb 2 23:15:23.613: INFO: Pod "adopt-release-2ln4r" satisfied condition "adopted" STEP: Removing the labels from the Job's Pod Feb 2 23:15:24.125: INFO: Successfully updated pod "adopt-release-2ln4r" STEP: Checking that the Job releases the Pod Feb 2 23:15:24.125: INFO: Waiting up to 15m0s for pod "adopt-release-2ln4r" in namespace "job-6983" to be "released" Feb 2 23:15:24.173: INFO: Pod "adopt-release-2ln4r": Phase="Running", Reason="", readiness=true. Elapsed: 47.234698ms Feb 2 23:15:26.177: INFO: Pod "adopt-release-2ln4r": Phase="Running", Reason="", readiness=true. Elapsed: 2.051642154s Feb 2 23:15:26.177: INFO: Pod "adopt-release-2ln4r" satisfied condition "released" [AfterEach] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 2 23:15:26.177: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-6983" for this suite. 
• [SLOW TEST:11.223 seconds] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching orphans and release non-matching pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":309,"completed":133,"skipped":2332,"failed":0} SSSSSS ------------------------------ [k8s.io] Variable Expansion should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [k8s.io] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 2 23:15:26.188: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: creating the pod with failed condition STEP: updating the pod Feb 2 23:17:27.030: INFO: Successfully updated pod "var-expansion-b52929d4-83ce-4ad9-8639-4ebecc39d49d" STEP: waiting for pod running STEP: deleting the pod gracefully Feb 2 23:17:29.055: INFO: Deleting pod "var-expansion-b52929d4-83ce-4ad9-8639-4ebecc39d49d" in namespace "var-expansion-9381" Feb 2 23:17:29.061: INFO: Wait up to 5m0s for pod "var-expansion-b52929d4-83ce-4ad9-8639-4ebecc39d49d" to be fully deleted [AfterEach] [k8s.io] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 2 23:18:11.148: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-9381" for this suite. 
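The Variable Expansion spec above creates a pod whose volume sub-path is expanded from pod metadata, waits while that expansion keeps the container from starting, then updates the pod so the mount can succeed. A hedged sketch of the underlying subPathExpr mechanism, with the pod name, annotation key, and value invented for illustration:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-demo
  annotations:
    mysubpath: "demo/dir"            # updating this annotation changes the expanded sub-path
spec:
  containers:
  - name: main
    image: busybox
    command: ["sh", "-c", "ls /mnt/data && sleep 3600"]
    env:
    - name: SUBPATH
      valueFrom:
        fieldRef:
          fieldPath: metadata.annotations['mysubpath']
    volumeMounts:
    - name: data
      mountPath: /mnt/data
      subPathExpr: $(SUBPATH)        # expanded from the container env var above
  volumes:
  - name: data
    emptyDir: {}
EOF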
• [SLOW TEST:164.970 seconds] [k8s.io] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance]","total":309,"completed":134,"skipped":2338,"failed":0} SSSSS ------------------------------ [sig-scheduling] SchedulerPreemption [Serial] validates basic preemption works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 2 23:18:11.158: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-preemption STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:90 Feb 2 23:18:11.264: INFO: Waiting up to 1m0s for all nodes to be ready Feb 2 23:19:11.289: INFO: Waiting for terminating namespaces to be deleted... [It] validates basic preemption works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Create pods that use 2/3 of node resources. Feb 2 23:19:11.341: INFO: Created pod: pod0-sched-preemption-low-priority Feb 2 23:19:11.372: INFO: Created pod: pod1-sched-preemption-medium-priority STEP: Wait for pods to be scheduled. STEP: Run a high priority pod that has same requirements as that of lower priority pod [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 2 23:19:35.429: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-2601" for this suite. 
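The preemption spec above fills roughly two thirds of the node resources with low- and medium-priority pods and then runs a high-priority pod with the same requirements, forcing the scheduler to evict a lower-priority victim. A minimal sketch of the PriorityClass / priorityClassName pairing involved, with hypothetical names, value, and resource request:

kubectl apply -f - <<'EOF'
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: high-priority-demo
value: 1000000                       # higher value wins when the scheduler must preempt
globalDefault: false
description: "Demo class for basic preemption"
---
apiVersion: v1
kind: Pod
metadata:
  name: preemptor-demo
spec:
  priorityClassName: high-priority-demo
  containers:
  - name: main
    image: busybox
    command: ["sleep", "3600"]
    resources:
      requests:
        cpu: "1"                     # sized so placing it may require evicting a lower-priority pod
EOF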
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:78 • [SLOW TEST:84.364 seconds] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates basic preemption works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] validates basic preemption works [Conformance]","total":309,"completed":135,"skipped":2343,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 2 23:19:35.523: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: http [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Performing setup for networking test in namespace pod-network-test-4164 STEP: creating a selector STEP: Creating the service pods in kubernetes Feb 2 23:19:35.578: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Feb 2 23:19:35.660: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Feb 2 23:19:37.665: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Feb 2 23:19:39.664: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Feb 2 23:19:41.663: INFO: The status of Pod netserver-0 is Running (Ready = false) Feb 2 23:19:43.663: INFO: The status of Pod netserver-0 is Running (Ready = false) Feb 2 23:19:45.665: INFO: The status of Pod netserver-0 is Running (Ready = false) Feb 2 23:19:47.665: INFO: The status of Pod netserver-0 is Running (Ready = false) Feb 2 23:19:49.664: INFO: The status of Pod netserver-0 is Running (Ready = false) Feb 2 23:19:51.664: INFO: The status of Pod netserver-0 is Running (Ready = false) Feb 2 23:19:53.664: INFO: The status of Pod netserver-0 is Running (Ready = true) Feb 2 23:19:53.671: INFO: The status of Pod netserver-1 is Running (Ready = false) Feb 2 23:19:55.676: INFO: The status of Pod netserver-1 is Running (Ready = false) Feb 2 23:19:57.675: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Feb 2 23:20:01.702: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2 Feb 2 23:20:01.702: INFO: Breadth first check of 10.244.2.57 on host 172.18.0.13... 
Feb 2 23:20:01.705: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.58:9080/dial?request=hostname&protocol=http&host=10.244.2.57&port=8080&tries=1'] Namespace:pod-network-test-4164 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Feb 2 23:20:01.705: INFO: >>> kubeConfig: /root/.kube/config I0202 23:20:01.738691 7 log.go:181] (0xc003af26e0) (0xc00404d180) Create stream I0202 23:20:01.738727 7 log.go:181] (0xc003af26e0) (0xc00404d180) Stream added, broadcasting: 1 I0202 23:20:01.740534 7 log.go:181] (0xc003af26e0) Reply frame received for 1 I0202 23:20:01.740588 7 log.go:181] (0xc003af26e0) (0xc0037ae0a0) Create stream I0202 23:20:01.740605 7 log.go:181] (0xc003af26e0) (0xc0037ae0a0) Stream added, broadcasting: 3 I0202 23:20:01.741690 7 log.go:181] (0xc003af26e0) Reply frame received for 3 I0202 23:20:01.741720 7 log.go:181] (0xc003af26e0) (0xc00404d220) Create stream I0202 23:20:01.741730 7 log.go:181] (0xc003af26e0) (0xc00404d220) Stream added, broadcasting: 5 I0202 23:20:01.742538 7 log.go:181] (0xc003af26e0) Reply frame received for 5 I0202 23:20:01.841737 7 log.go:181] (0xc003af26e0) Data frame received for 3 I0202 23:20:01.841771 7 log.go:181] (0xc0037ae0a0) (3) Data frame handling I0202 23:20:01.841807 7 log.go:181] (0xc0037ae0a0) (3) Data frame sent I0202 23:20:01.842646 7 log.go:181] (0xc003af26e0) Data frame received for 5 I0202 23:20:01.842679 7 log.go:181] (0xc00404d220) (5) Data frame handling I0202 23:20:01.842709 7 log.go:181] (0xc003af26e0) Data frame received for 3 I0202 23:20:01.842730 7 log.go:181] (0xc0037ae0a0) (3) Data frame handling I0202 23:20:01.844663 7 log.go:181] (0xc003af26e0) Data frame received for 1 I0202 23:20:01.844682 7 log.go:181] (0xc00404d180) (1) Data frame handling I0202 23:20:01.844693 7 log.go:181] (0xc00404d180) (1) Data frame sent I0202 23:20:01.844713 7 log.go:181] (0xc003af26e0) (0xc00404d180) Stream removed, broadcasting: 1 I0202 23:20:01.844732 7 log.go:181] (0xc003af26e0) Go away received I0202 23:20:01.844782 7 log.go:181] (0xc003af26e0) (0xc00404d180) Stream removed, broadcasting: 1 I0202 23:20:01.844811 7 log.go:181] (0xc003af26e0) (0xc0037ae0a0) Stream removed, broadcasting: 3 I0202 23:20:01.844822 7 log.go:181] (0xc003af26e0) (0xc00404d220) Stream removed, broadcasting: 5 Feb 2 23:20:01.844: INFO: Waiting for responses: map[] Feb 2 23:20:01.844: INFO: reached 10.244.2.57 after 0/1 tries Feb 2 23:20:01.844: INFO: Breadth first check of 10.244.1.216 on host 172.18.0.12... 
Feb 2 23:20:01.848: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.58:9080/dial?request=hostname&protocol=http&host=10.244.1.216&port=8080&tries=1'] Namespace:pod-network-test-4164 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Feb 2 23:20:01.848: INFO: >>> kubeConfig: /root/.kube/config I0202 23:20:01.873497 7 log.go:181] (0xc004451c30) (0xc0047bb360) Create stream I0202 23:20:01.873526 7 log.go:181] (0xc004451c30) (0xc0047bb360) Stream added, broadcasting: 1 I0202 23:20:01.875470 7 log.go:181] (0xc004451c30) Reply frame received for 1 I0202 23:20:01.875528 7 log.go:181] (0xc004451c30) (0xc0037ae1e0) Create stream I0202 23:20:01.875554 7 log.go:181] (0xc004451c30) (0xc0037ae1e0) Stream added, broadcasting: 3 I0202 23:20:01.876510 7 log.go:181] (0xc004451c30) Reply frame received for 3 I0202 23:20:01.876548 7 log.go:181] (0xc004451c30) (0xc001eddea0) Create stream I0202 23:20:01.876560 7 log.go:181] (0xc004451c30) (0xc001eddea0) Stream added, broadcasting: 5 I0202 23:20:01.877747 7 log.go:181] (0xc004451c30) Reply frame received for 5 I0202 23:20:01.955669 7 log.go:181] (0xc004451c30) Data frame received for 3 I0202 23:20:01.955691 7 log.go:181] (0xc0037ae1e0) (3) Data frame handling I0202 23:20:01.955704 7 log.go:181] (0xc0037ae1e0) (3) Data frame sent I0202 23:20:01.956200 7 log.go:181] (0xc004451c30) Data frame received for 3 I0202 23:20:01.956230 7 log.go:181] (0xc0037ae1e0) (3) Data frame handling I0202 23:20:01.956343 7 log.go:181] (0xc004451c30) Data frame received for 5 I0202 23:20:01.956355 7 log.go:181] (0xc001eddea0) (5) Data frame handling I0202 23:20:01.957763 7 log.go:181] (0xc004451c30) Data frame received for 1 I0202 23:20:01.957802 7 log.go:181] (0xc0047bb360) (1) Data frame handling I0202 23:20:01.957844 7 log.go:181] (0xc0047bb360) (1) Data frame sent I0202 23:20:01.957866 7 log.go:181] (0xc004451c30) (0xc0047bb360) Stream removed, broadcasting: 1 I0202 23:20:01.957890 7 log.go:181] (0xc004451c30) Go away received I0202 23:20:01.957972 7 log.go:181] (0xc004451c30) (0xc0047bb360) Stream removed, broadcasting: 1 I0202 23:20:01.957990 7 log.go:181] (0xc004451c30) (0xc0037ae1e0) Stream removed, broadcasting: 3 I0202 23:20:01.958006 7 log.go:181] (0xc004451c30) (0xc001eddea0) Stream removed, broadcasting: 5 Feb 2 23:20:01.958: INFO: Waiting for responses: map[] Feb 2 23:20:01.958: INFO: reached 10.244.1.216 after 0/1 tries Feb 2 23:20:01.958: INFO: Going to retry 0 out of 2 pods.... [AfterEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 2 23:20:01.958: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-4164" for this suite. 
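The two checks above exec into test-container-pod and ask its web server's /dial endpoint to fetch the hostname of each netserver pod over HTTP. The same probe can be issued by hand while the pods still exist; the namespace, pod name, and IPs below are simply the ones printed in the log and disappear once the namespace is destroyed:

# Ask the client pod's dial endpoint (port 9080) to reach a target pod on port 8080.
TARGET_IP=10.244.2.57                # netserver pod IP from the log above
kubectl -n pod-network-test-4164 exec test-container-pod -- \
  curl -g -q -s "http://10.244.2.58:9080/dial?request=hostname&protocol=http&host=${TARGET_IP}&port=8080&tries=1"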
• [SLOW TEST:26.443 seconds] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:27 Granular Checks: Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:30 should function for intra-pod communication: http [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","total":309,"completed":136,"skipped":2374,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 2 23:20:01.967: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:187 [It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Feb 2 23:20:06.731: INFO: Successfully updated pod "pod-update-activedeadlineseconds-9b2d1bd2-b321-4bc1-b35b-5fb6e136409c" Feb 2 23:20:06.731: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-9b2d1bd2-b321-4bc1-b35b-5fb6e136409c" in namespace "pods-143" to be "terminated due to deadline exceeded" Feb 2 23:20:06.739: INFO: Pod "pod-update-activedeadlineseconds-9b2d1bd2-b321-4bc1-b35b-5fb6e136409c": Phase="Running", Reason="", readiness=true. Elapsed: 8.055516ms Feb 2 23:20:08.743: INFO: Pod "pod-update-activedeadlineseconds-9b2d1bd2-b321-4bc1-b35b-5fb6e136409c": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.012238315s Feb 2 23:20:08.743: INFO: Pod "pod-update-activedeadlineseconds-9b2d1bd2-b321-4bc1-b35b-5fb6e136409c" satisfied condition "terminated due to deadline exceeded" [AfterEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 2 23:20:08.743: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-143" for this suite. 
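The spec above updates a running pod's activeDeadlineSeconds and then waits for it to fail with reason DeadlineExceeded. A minimal sketch of the same update with plain kubectl, assuming a hypothetical running pod named deadline-demo (activeDeadlineSeconds is one of the few pod-spec fields that may be changed in place):

kubectl patch pod deadline-demo -p '{"spec":{"activeDeadlineSeconds":5}}'
# Once the deadline elapses the pod transitions to Failed with reason DeadlineExceeded:
kubectl get pod deadline-demo -w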
• [SLOW TEST:6.783 seconds] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":309,"completed":137,"skipped":2413,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 2 23:20:08.751: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-4876 [It] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating stateful set ss in namespace statefulset-4876 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-4876 Feb 2 23:20:09.547: INFO: Found 0 stateful pods, waiting for 1 Feb 2 23:20:19.553: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod Feb 2 23:20:19.556: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-4876 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Feb 2 23:20:19.814: INFO: stderr: "I0202 23:20:19.695252 1700 log.go:181] (0xc00003a0b0) (0xc000c82000) Create stream\nI0202 23:20:19.695359 1700 log.go:181] (0xc00003a0b0) (0xc000c82000) Stream added, broadcasting: 1\nI0202 23:20:19.697291 1700 log.go:181] (0xc00003a0b0) Reply frame received for 1\nI0202 23:20:19.697327 1700 log.go:181] (0xc00003a0b0) (0xc000b943c0) Create stream\nI0202 23:20:19.697339 1700 log.go:181] (0xc00003a0b0) (0xc000b943c0) Stream added, broadcasting: 3\nI0202 23:20:19.698149 1700 log.go:181] (0xc00003a0b0) Reply frame received for 3\nI0202 23:20:19.698178 1700 log.go:181] (0xc00003a0b0) (0xc00087e500) Create stream\nI0202 23:20:19.698193 1700 log.go:181] (0xc00003a0b0) (0xc00087e500) Stream added, broadcasting: 5\nI0202 23:20:19.698933 1700 log.go:181] (0xc00003a0b0) Reply frame received for 5\nI0202 23:20:19.775353 1700 log.go:181] (0xc00003a0b0) Data frame received for 5\nI0202 23:20:19.775389 1700 log.go:181] (0xc00087e500) (5) Data frame handling\nI0202 23:20:19.775409 1700 log.go:181] (0xc00087e500) 
(5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0202 23:20:19.806131 1700 log.go:181] (0xc00003a0b0) Data frame received for 3\nI0202 23:20:19.806155 1700 log.go:181] (0xc000b943c0) (3) Data frame handling\nI0202 23:20:19.806168 1700 log.go:181] (0xc000b943c0) (3) Data frame sent\nI0202 23:20:19.806408 1700 log.go:181] (0xc00003a0b0) Data frame received for 5\nI0202 23:20:19.806458 1700 log.go:181] (0xc00087e500) (5) Data frame handling\nI0202 23:20:19.806480 1700 log.go:181] (0xc00003a0b0) Data frame received for 3\nI0202 23:20:19.806496 1700 log.go:181] (0xc000b943c0) (3) Data frame handling\nI0202 23:20:19.808490 1700 log.go:181] (0xc00003a0b0) Data frame received for 1\nI0202 23:20:19.808504 1700 log.go:181] (0xc000c82000) (1) Data frame handling\nI0202 23:20:19.808517 1700 log.go:181] (0xc000c82000) (1) Data frame sent\nI0202 23:20:19.808654 1700 log.go:181] (0xc00003a0b0) (0xc000c82000) Stream removed, broadcasting: 1\nI0202 23:20:19.808697 1700 log.go:181] (0xc00003a0b0) Go away received\nI0202 23:20:19.809147 1700 log.go:181] (0xc00003a0b0) (0xc000c82000) Stream removed, broadcasting: 1\nI0202 23:20:19.809162 1700 log.go:181] (0xc00003a0b0) (0xc000b943c0) Stream removed, broadcasting: 3\nI0202 23:20:19.809170 1700 log.go:181] (0xc00003a0b0) (0xc00087e500) Stream removed, broadcasting: 5\n" Feb 2 23:20:19.814: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Feb 2 23:20:19.814: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Feb 2 23:20:19.818: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Feb 2 23:20:29.823: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Feb 2 23:20:29.823: INFO: Waiting for statefulset status.replicas updated to 0 Feb 2 23:20:29.842: INFO: POD NODE PHASE GRACE CONDITIONS Feb 2 23:20:29.842: INFO: ss-0 leguer-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-02-02 23:20:09 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-02-02 23:20:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-02-02 23:20:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-02-02 23:20:09 +0000 UTC }] Feb 2 23:20:29.842: INFO: Feb 2 23:20:29.842: INFO: StatefulSet ss has not reached scale 3, at 1 Feb 2 23:20:30.847: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.992945529s Feb 2 23:20:32.258: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.987829573s Feb 2 23:20:33.262: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.576903156s Feb 2 23:20:34.267: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.572623156s Feb 2 23:20:35.271: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.568178371s Feb 2 23:20:36.276: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.563521662s Feb 2 23:20:37.281: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.559114682s Feb 2 23:20:38.286: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.553928104s Feb 2 23:20:39.291: INFO: Verifying statefulset ss doesn't scale past 3 for another 548.913204ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace 
statefulset-4876 Feb 2 23:20:40.295: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-4876 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Feb 2 23:20:40.526: INFO: stderr: "I0202 23:20:40.424437 1718 log.go:181] (0xc0009cd3f0) (0xc000da48c0) Create stream\nI0202 23:20:40.424503 1718 log.go:181] (0xc0009cd3f0) (0xc000da48c0) Stream added, broadcasting: 1\nI0202 23:20:40.430120 1718 log.go:181] (0xc0009cd3f0) Reply frame received for 1\nI0202 23:20:40.430166 1718 log.go:181] (0xc0009cd3f0) (0xc000cea000) Create stream\nI0202 23:20:40.430180 1718 log.go:181] (0xc0009cd3f0) (0xc000cea000) Stream added, broadcasting: 3\nI0202 23:20:40.431004 1718 log.go:181] (0xc0009cd3f0) Reply frame received for 3\nI0202 23:20:40.431044 1718 log.go:181] (0xc0009cd3f0) (0xc000cea0a0) Create stream\nI0202 23:20:40.431053 1718 log.go:181] (0xc0009cd3f0) (0xc000cea0a0) Stream added, broadcasting: 5\nI0202 23:20:40.432031 1718 log.go:181] (0xc0009cd3f0) Reply frame received for 5\nI0202 23:20:40.519042 1718 log.go:181] (0xc0009cd3f0) Data frame received for 3\nI0202 23:20:40.519076 1718 log.go:181] (0xc000cea000) (3) Data frame handling\nI0202 23:20:40.519085 1718 log.go:181] (0xc000cea000) (3) Data frame sent\nI0202 23:20:40.519091 1718 log.go:181] (0xc0009cd3f0) Data frame received for 3\nI0202 23:20:40.519096 1718 log.go:181] (0xc000cea000) (3) Data frame handling\nI0202 23:20:40.519161 1718 log.go:181] (0xc0009cd3f0) Data frame received for 5\nI0202 23:20:40.519190 1718 log.go:181] (0xc000cea0a0) (5) Data frame handling\nI0202 23:20:40.519210 1718 log.go:181] (0xc000cea0a0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0202 23:20:40.519240 1718 log.go:181] (0xc0009cd3f0) Data frame received for 5\nI0202 23:20:40.519255 1718 log.go:181] (0xc000cea0a0) (5) Data frame handling\nI0202 23:20:40.520799 1718 log.go:181] (0xc0009cd3f0) Data frame received for 1\nI0202 23:20:40.520826 1718 log.go:181] (0xc000da48c0) (1) Data frame handling\nI0202 23:20:40.520961 1718 log.go:181] (0xc000da48c0) (1) Data frame sent\nI0202 23:20:40.520986 1718 log.go:181] (0xc0009cd3f0) (0xc000da48c0) Stream removed, broadcasting: 1\nI0202 23:20:40.521006 1718 log.go:181] (0xc0009cd3f0) Go away received\nI0202 23:20:40.521302 1718 log.go:181] (0xc0009cd3f0) (0xc000da48c0) Stream removed, broadcasting: 1\nI0202 23:20:40.521318 1718 log.go:181] (0xc0009cd3f0) (0xc000cea000) Stream removed, broadcasting: 3\nI0202 23:20:40.521325 1718 log.go:181] (0xc0009cd3f0) (0xc000cea0a0) Stream removed, broadcasting: 5\n" Feb 2 23:20:40.527: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Feb 2 23:20:40.527: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Feb 2 23:20:40.527: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-4876 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Feb 2 23:20:40.750: INFO: stderr: "I0202 23:20:40.656589 1736 log.go:181] (0xc00023a210) (0xc0000cc460) Create stream\nI0202 23:20:40.656662 1736 log.go:181] (0xc00023a210) (0xc0000cc460) Stream added, broadcasting: 1\nI0202 23:20:40.660800 1736 log.go:181] (0xc00023a210) Reply frame received for 1\nI0202 23:20:40.661089 1736 log.go:181] (0xc00023a210) (0xc00019f220) Create stream\nI0202 
23:20:40.661181 1736 log.go:181] (0xc00023a210) (0xc00019f220) Stream added, broadcasting: 3\nI0202 23:20:40.666076 1736 log.go:181] (0xc00023a210) Reply frame received for 3\nI0202 23:20:40.666118 1736 log.go:181] (0xc00023a210) (0xc0000cdae0) Create stream\nI0202 23:20:40.666134 1736 log.go:181] (0xc00023a210) (0xc0000cdae0) Stream added, broadcasting: 5\nI0202 23:20:40.667134 1736 log.go:181] (0xc00023a210) Reply frame received for 5\nI0202 23:20:40.736776 1736 log.go:181] (0xc00023a210) Data frame received for 5\nI0202 23:20:40.736811 1736 log.go:181] (0xc0000cdae0) (5) Data frame handling\nI0202 23:20:40.736938 1736 log.go:181] (0xc0000cdae0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0202 23:20:40.741513 1736 log.go:181] (0xc00023a210) Data frame received for 5\nI0202 23:20:40.741535 1736 log.go:181] (0xc0000cdae0) (5) Data frame handling\nI0202 23:20:40.741545 1736 log.go:181] (0xc0000cdae0) (5) Data frame sent\nmv: can't rename '/tmp/index.html': No such file or directory\nI0202 23:20:40.741625 1736 log.go:181] (0xc00023a210) Data frame received for 5\nI0202 23:20:40.741673 1736 log.go:181] (0xc0000cdae0) (5) Data frame handling\nI0202 23:20:40.741707 1736 log.go:181] (0xc0000cdae0) (5) Data frame sent\nI0202 23:20:40.741730 1736 log.go:181] (0xc00023a210) Data frame received for 5\n+ true\nI0202 23:20:40.741743 1736 log.go:181] (0xc0000cdae0) (5) Data frame handling\nI0202 23:20:40.741807 1736 log.go:181] (0xc00023a210) Data frame received for 3\nI0202 23:20:40.741846 1736 log.go:181] (0xc00019f220) (3) Data frame handling\nI0202 23:20:40.741878 1736 log.go:181] (0xc00019f220) (3) Data frame sent\nI0202 23:20:40.741917 1736 log.go:181] (0xc00023a210) Data frame received for 3\nI0202 23:20:40.741931 1736 log.go:181] (0xc00019f220) (3) Data frame handling\nI0202 23:20:40.743769 1736 log.go:181] (0xc00023a210) Data frame received for 1\nI0202 23:20:40.743785 1736 log.go:181] (0xc0000cc460) (1) Data frame handling\nI0202 23:20:40.743800 1736 log.go:181] (0xc0000cc460) (1) Data frame sent\nI0202 23:20:40.743810 1736 log.go:181] (0xc00023a210) (0xc0000cc460) Stream removed, broadcasting: 1\nI0202 23:20:40.743824 1736 log.go:181] (0xc00023a210) Go away received\nI0202 23:20:40.744150 1736 log.go:181] (0xc00023a210) (0xc0000cc460) Stream removed, broadcasting: 1\nI0202 23:20:40.744169 1736 log.go:181] (0xc00023a210) (0xc00019f220) Stream removed, broadcasting: 3\nI0202 23:20:40.744178 1736 log.go:181] (0xc00023a210) (0xc0000cdae0) Stream removed, broadcasting: 5\n" Feb 2 23:20:40.750: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Feb 2 23:20:40.750: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Feb 2 23:20:40.750: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-4876 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Feb 2 23:20:40.963: INFO: stderr: "I0202 23:20:40.878552 1754 log.go:181] (0xc000168000) (0xc000586460) Create stream\nI0202 23:20:40.878631 1754 log.go:181] (0xc000168000) (0xc000586460) Stream added, broadcasting: 1\nI0202 23:20:40.880564 1754 log.go:181] (0xc000168000) Reply frame received for 1\nI0202 23:20:40.880608 1754 log.go:181] (0xc000168000) (0xc0005870e0) Create stream\nI0202 23:20:40.880638 1754 log.go:181] (0xc000168000) (0xc0005870e0) Stream added, broadcasting: 3\nI0202 23:20:40.881758 1754 
log.go:181] (0xc000168000) Reply frame received for 3\nI0202 23:20:40.881818 1754 log.go:181] (0xc000168000) (0xc000d90000) Create stream\nI0202 23:20:40.881857 1754 log.go:181] (0xc000168000) (0xc000d90000) Stream added, broadcasting: 5\nI0202 23:20:40.882799 1754 log.go:181] (0xc000168000) Reply frame received for 5\nI0202 23:20:40.950307 1754 log.go:181] (0xc000168000) Data frame received for 5\nI0202 23:20:40.950344 1754 log.go:181] (0xc000d90000) (5) Data frame handling\nI0202 23:20:40.950362 1754 log.go:181] (0xc000d90000) (5) Data frame sent\nI0202 23:20:40.950378 1754 log.go:181] (0xc000168000) Data frame received for 3\nI0202 23:20:40.950392 1754 log.go:181] (0xc0005870e0) (3) Data frame handling\nI0202 23:20:40.950403 1754 log.go:181] (0xc0005870e0) (3) Data frame sent\nI0202 23:20:40.950414 1754 log.go:181] (0xc000168000) Data frame received for 3\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0202 23:20:40.950429 1754 log.go:181] (0xc0005870e0) (3) Data frame handling\nI0202 23:20:40.950494 1754 log.go:181] (0xc000168000) Data frame received for 5\nI0202 23:20:40.950523 1754 log.go:181] (0xc000d90000) (5) Data frame handling\nI0202 23:20:40.957807 1754 log.go:181] (0xc000168000) Data frame received for 1\nI0202 23:20:40.957841 1754 log.go:181] (0xc000586460) (1) Data frame handling\nI0202 23:20:40.957861 1754 log.go:181] (0xc000586460) (1) Data frame sent\nI0202 23:20:40.957920 1754 log.go:181] (0xc000168000) (0xc000586460) Stream removed, broadcasting: 1\nI0202 23:20:40.957945 1754 log.go:181] (0xc000168000) Go away received\nI0202 23:20:40.958604 1754 log.go:181] (0xc000168000) (0xc000586460) Stream removed, broadcasting: 1\nI0202 23:20:40.958621 1754 log.go:181] (0xc000168000) (0xc0005870e0) Stream removed, broadcasting: 3\nI0202 23:20:40.958630 1754 log.go:181] (0xc000168000) (0xc000d90000) Stream removed, broadcasting: 5\n" Feb 2 23:20:40.963: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Feb 2 23:20:40.964: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Feb 2 23:20:40.968: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false Feb 2 23:20:50.973: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Feb 2 23:20:50.973: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Feb 2 23:20:50.973: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod Feb 2 23:20:50.977: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-4876 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Feb 2 23:20:51.228: INFO: stderr: "I0202 23:20:51.112971 1772 log.go:181] (0xc0001b0370) (0xc0008fa000) Create stream\nI0202 23:20:51.113033 1772 log.go:181] (0xc0001b0370) (0xc0008fa000) Stream added, broadcasting: 1\nI0202 23:20:51.115177 1772 log.go:181] (0xc0001b0370) Reply frame received for 1\nI0202 23:20:51.115215 1772 log.go:181] (0xc0001b0370) (0xc00039bae0) Create stream\nI0202 23:20:51.115227 1772 log.go:181] (0xc0001b0370) (0xc00039bae0) Stream added, broadcasting: 3\nI0202 23:20:51.116229 1772 log.go:181] (0xc0001b0370) Reply frame received for 3\nI0202 23:20:51.116257 1772 log.go:181] 
(0xc0001b0370) (0xc0008c0000) Create stream\nI0202 23:20:51.116268 1772 log.go:181] (0xc0001b0370) (0xc0008c0000) Stream added, broadcasting: 5\nI0202 23:20:51.117282 1772 log.go:181] (0xc0001b0370) Reply frame received for 5\nI0202 23:20:51.214471 1772 log.go:181] (0xc0001b0370) Data frame received for 3\nI0202 23:20:51.214524 1772 log.go:181] (0xc00039bae0) (3) Data frame handling\nI0202 23:20:51.214547 1772 log.go:181] (0xc00039bae0) (3) Data frame sent\nI0202 23:20:51.214590 1772 log.go:181] (0xc0001b0370) Data frame received for 5\nI0202 23:20:51.214607 1772 log.go:181] (0xc0008c0000) (5) Data frame handling\nI0202 23:20:51.214625 1772 log.go:181] (0xc0008c0000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0202 23:20:51.214728 1772 log.go:181] (0xc0001b0370) Data frame received for 3\nI0202 23:20:51.214751 1772 log.go:181] (0xc00039bae0) (3) Data frame handling\nI0202 23:20:51.214780 1772 log.go:181] (0xc0001b0370) Data frame received for 5\nI0202 23:20:51.214791 1772 log.go:181] (0xc0008c0000) (5) Data frame handling\nI0202 23:20:51.216566 1772 log.go:181] (0xc0001b0370) Data frame received for 1\nI0202 23:20:51.216600 1772 log.go:181] (0xc0008fa000) (1) Data frame handling\nI0202 23:20:51.216623 1772 log.go:181] (0xc0008fa000) (1) Data frame sent\nI0202 23:20:51.216977 1772 log.go:181] (0xc0001b0370) (0xc0008fa000) Stream removed, broadcasting: 1\nI0202 23:20:51.217060 1772 log.go:181] (0xc0001b0370) Go away received\nI0202 23:20:51.217623 1772 log.go:181] (0xc0001b0370) (0xc0008fa000) Stream removed, broadcasting: 1\nI0202 23:20:51.217655 1772 log.go:181] (0xc0001b0370) (0xc00039bae0) Stream removed, broadcasting: 3\nI0202 23:20:51.217679 1772 log.go:181] (0xc0001b0370) (0xc0008c0000) Stream removed, broadcasting: 5\n" Feb 2 23:20:51.228: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Feb 2 23:20:51.228: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Feb 2 23:20:51.228: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-4876 exec ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Feb 2 23:20:51.455: INFO: stderr: "I0202 23:20:51.351785 1790 log.go:181] (0xc0001fc000) (0xc000424d20) Create stream\nI0202 23:20:51.351851 1790 log.go:181] (0xc0001fc000) (0xc000424d20) Stream added, broadcasting: 1\nI0202 23:20:51.353894 1790 log.go:181] (0xc0001fc000) Reply frame received for 1\nI0202 23:20:51.353957 1790 log.go:181] (0xc0001fc000) (0xc00090a8c0) Create stream\nI0202 23:20:51.353970 1790 log.go:181] (0xc0001fc000) (0xc00090a8c0) Stream added, broadcasting: 3\nI0202 23:20:51.354828 1790 log.go:181] (0xc0001fc000) Reply frame received for 3\nI0202 23:20:51.354846 1790 log.go:181] (0xc0001fc000) (0xc000b0afa0) Create stream\nI0202 23:20:51.354852 1790 log.go:181] (0xc0001fc000) (0xc000b0afa0) Stream added, broadcasting: 5\nI0202 23:20:51.355731 1790 log.go:181] (0xc0001fc000) Reply frame received for 5\nI0202 23:20:51.414184 1790 log.go:181] (0xc0001fc000) Data frame received for 5\nI0202 23:20:51.414227 1790 log.go:181] (0xc000b0afa0) (5) Data frame handling\nI0202 23:20:51.414242 1790 log.go:181] (0xc000b0afa0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0202 23:20:51.446024 1790 log.go:181] (0xc0001fc000) Data frame received for 3\nI0202 23:20:51.446066 1790 log.go:181] (0xc00090a8c0) (3) Data 
frame handling\nI0202 23:20:51.446128 1790 log.go:181] (0xc00090a8c0) (3) Data frame sent\nI0202 23:20:51.446212 1790 log.go:181] (0xc0001fc000) Data frame received for 3\nI0202 23:20:51.446233 1790 log.go:181] (0xc00090a8c0) (3) Data frame handling\nI0202 23:20:51.446410 1790 log.go:181] (0xc0001fc000) Data frame received for 5\nI0202 23:20:51.446437 1790 log.go:181] (0xc000b0afa0) (5) Data frame handling\nI0202 23:20:51.448250 1790 log.go:181] (0xc0001fc000) Data frame received for 1\nI0202 23:20:51.448274 1790 log.go:181] (0xc000424d20) (1) Data frame handling\nI0202 23:20:51.448307 1790 log.go:181] (0xc000424d20) (1) Data frame sent\nI0202 23:20:51.448475 1790 log.go:181] (0xc0001fc000) (0xc000424d20) Stream removed, broadcasting: 1\nI0202 23:20:51.448570 1790 log.go:181] (0xc0001fc000) Go away received\nI0202 23:20:51.449045 1790 log.go:181] (0xc0001fc000) (0xc000424d20) Stream removed, broadcasting: 1\nI0202 23:20:51.449081 1790 log.go:181] (0xc0001fc000) (0xc00090a8c0) Stream removed, broadcasting: 3\nI0202 23:20:51.449101 1790 log.go:181] (0xc0001fc000) (0xc000b0afa0) Stream removed, broadcasting: 5\n" Feb 2 23:20:51.455: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Feb 2 23:20:51.455: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Feb 2 23:20:51.455: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-4876 exec ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Feb 2 23:20:51.749: INFO: stderr: "I0202 23:20:51.640389 1808 log.go:181] (0xc000010000) (0xc0008b4000) Create stream\nI0202 23:20:51.640482 1808 log.go:181] (0xc000010000) (0xc0008b4000) Stream added, broadcasting: 1\nI0202 23:20:51.642244 1808 log.go:181] (0xc000010000) Reply frame received for 1\nI0202 23:20:51.642299 1808 log.go:181] (0xc000010000) (0xc0008b40a0) Create stream\nI0202 23:20:51.642312 1808 log.go:181] (0xc000010000) (0xc0008b40a0) Stream added, broadcasting: 3\nI0202 23:20:51.643242 1808 log.go:181] (0xc000010000) Reply frame received for 3\nI0202 23:20:51.643286 1808 log.go:181] (0xc000010000) (0xc0008b4140) Create stream\nI0202 23:20:51.643313 1808 log.go:181] (0xc000010000) (0xc0008b4140) Stream added, broadcasting: 5\nI0202 23:20:51.643950 1808 log.go:181] (0xc000010000) Reply frame received for 5\nI0202 23:20:51.697924 1808 log.go:181] (0xc000010000) Data frame received for 5\nI0202 23:20:51.697953 1808 log.go:181] (0xc0008b4140) (5) Data frame handling\nI0202 23:20:51.697976 1808 log.go:181] (0xc0008b4140) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0202 23:20:51.740060 1808 log.go:181] (0xc000010000) Data frame received for 3\nI0202 23:20:51.740088 1808 log.go:181] (0xc0008b40a0) (3) Data frame handling\nI0202 23:20:51.740104 1808 log.go:181] (0xc0008b40a0) (3) Data frame sent\nI0202 23:20:51.740114 1808 log.go:181] (0xc000010000) Data frame received for 3\nI0202 23:20:51.740123 1808 log.go:181] (0xc0008b40a0) (3) Data frame handling\nI0202 23:20:51.740245 1808 log.go:181] (0xc000010000) Data frame received for 5\nI0202 23:20:51.740271 1808 log.go:181] (0xc0008b4140) (5) Data frame handling\nI0202 23:20:51.742107 1808 log.go:181] (0xc000010000) Data frame received for 1\nI0202 23:20:51.742129 1808 log.go:181] (0xc0008b4000) (1) Data frame handling\nI0202 23:20:51.742144 1808 log.go:181] (0xc0008b4000) (1) Data frame sent\nI0202 
23:20:51.742158 1808 log.go:181] (0xc000010000) (0xc0008b4000) Stream removed, broadcasting: 1\nI0202 23:20:51.742177 1808 log.go:181] (0xc000010000) Go away received\nI0202 23:20:51.742642 1808 log.go:181] (0xc000010000) (0xc0008b4000) Stream removed, broadcasting: 1\nI0202 23:20:51.742665 1808 log.go:181] (0xc000010000) (0xc0008b40a0) Stream removed, broadcasting: 3\nI0202 23:20:51.742675 1808 log.go:181] (0xc000010000) (0xc0008b4140) Stream removed, broadcasting: 5\n" Feb 2 23:20:51.749: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Feb 2 23:20:51.749: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Feb 2 23:20:51.749: INFO: Waiting for statefulset status.replicas updated to 0 Feb 2 23:20:51.753: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1 Feb 2 23:21:01.761: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Feb 2 23:21:01.761: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Feb 2 23:21:01.761: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Feb 2 23:21:01.790: INFO: POD NODE PHASE GRACE CONDITIONS Feb 2 23:21:01.790: INFO: ss-0 leguer-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-02-02 23:20:09 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-02-02 23:20:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-02-02 23:20:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-02-02 23:20:09 +0000 UTC }] Feb 2 23:21:01.791: INFO: ss-1 leguer-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-02-02 23:20:29 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-02-02 23:20:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-02-02 23:20:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-02-02 23:20:29 +0000 UTC }] Feb 2 23:21:01.791: INFO: ss-2 leguer-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-02-02 23:20:29 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-02-02 23:20:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-02-02 23:20:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-02-02 23:20:29 +0000 UTC }] Feb 2 23:21:01.791: INFO: Feb 2 23:21:01.791: INFO: StatefulSet ss has not reached scale 0, at 3 Feb 2 23:21:02.979: INFO: POD NODE PHASE GRACE CONDITIONS Feb 2 23:21:02.980: INFO: ss-0 leguer-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-02-02 23:20:09 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-02-02 23:20:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-02-02 23:20:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-02-02 23:20:09 +0000 UTC }] Feb 2 23:21:02.980: INFO: ss-1 leguer-worker2 Running 30s [{Initialized True 0001-01-01 
00:00:00 +0000 UTC 2021-02-02 23:20:29 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-02-02 23:20:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-02-02 23:20:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-02-02 23:20:29 +0000 UTC }] Feb 2 23:21:02.980: INFO: ss-2 leguer-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-02-02 23:20:29 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-02-02 23:20:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-02-02 23:20:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-02-02 23:20:29 +0000 UTC }] Feb 2 23:21:02.980: INFO: Feb 2 23:21:02.980: INFO: StatefulSet ss has not reached scale 0, at 3 Feb 2 23:21:03.985: INFO: POD NODE PHASE GRACE CONDITIONS Feb 2 23:21:03.985: INFO: ss-0 leguer-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-02-02 23:20:09 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-02-02 23:20:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-02-02 23:20:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-02-02 23:20:09 +0000 UTC }] Feb 2 23:21:03.985: INFO: ss-1 leguer-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-02-02 23:20:29 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-02-02 23:20:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-02-02 23:20:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-02-02 23:20:29 +0000 UTC }] Feb 2 23:21:03.985: INFO: ss-2 leguer-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-02-02 23:20:29 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-02-02 23:20:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-02-02 23:20:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-02-02 23:20:29 +0000 UTC }] Feb 2 23:21:03.985: INFO: Feb 2 23:21:03.985: INFO: StatefulSet ss has not reached scale 0, at 3 Feb 2 23:21:04.989: INFO: POD NODE PHASE GRACE CONDITIONS Feb 2 23:21:04.989: INFO: ss-0 leguer-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-02-02 23:20:09 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-02-02 23:20:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-02-02 23:20:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-02-02 23:20:09 +0000 UTC }] Feb 2 23:21:04.989: INFO: ss-2 leguer-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-02-02 23:20:29 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-02-02 23:20:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 
0001-01-01 00:00:00 +0000 UTC 2021-02-02 23:20:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-02-02 23:20:29 +0000 UTC }] Feb 2 23:21:04.990: INFO: Feb 2 23:21:04.990: INFO: StatefulSet ss has not reached scale 0, at 2 Feb 2 23:21:05.996: INFO: POD NODE PHASE GRACE CONDITIONS Feb 2 23:21:05.996: INFO: ss-0 leguer-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-02-02 23:20:09 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-02-02 23:20:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-02-02 23:20:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-02-02 23:20:09 +0000 UTC }] Feb 2 23:21:05.996: INFO: ss-2 leguer-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-02-02 23:20:29 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-02-02 23:20:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-02-02 23:20:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-02-02 23:20:29 +0000 UTC }] Feb 2 23:21:05.996: INFO: Feb 2 23:21:05.996: INFO: StatefulSet ss has not reached scale 0, at 2 Feb 2 23:21:07.001: INFO: POD NODE PHASE GRACE CONDITIONS Feb 2 23:21:07.002: INFO: ss-0 leguer-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-02-02 23:20:09 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-02-02 23:20:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-02-02 23:20:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-02-02 23:20:09 +0000 UTC }] Feb 2 23:21:07.002: INFO: ss-2 leguer-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-02-02 23:20:29 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-02-02 23:20:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-02-02 23:20:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-02-02 23:20:29 +0000 UTC }] Feb 2 23:21:07.002: INFO: Feb 2 23:21:07.002: INFO: StatefulSet ss has not reached scale 0, at 2 Feb 2 23:21:08.007: INFO: POD NODE PHASE GRACE CONDITIONS Feb 2 23:21:08.007: INFO: ss-0 leguer-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-02-02 23:20:09 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-02-02 23:20:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-02-02 23:20:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-02-02 23:20:09 +0000 UTC }] Feb 2 23:21:08.007: INFO: ss-2 leguer-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-02-02 23:20:29 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-02-02 23:20:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-02-02 23:20:51 +0000 UTC 
ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-02-02 23:20:29 +0000 UTC }] Feb 2 23:21:08.007: INFO: Feb 2 23:21:08.007: INFO: StatefulSet ss has not reached scale 0, at 2 Feb 2 23:21:09.012: INFO: POD NODE PHASE GRACE CONDITIONS Feb 2 23:21:09.012: INFO: ss-0 leguer-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-02-02 23:20:09 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-02-02 23:20:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-02-02 23:20:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-02-02 23:20:09 +0000 UTC }] Feb 2 23:21:09.012: INFO: ss-2 leguer-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-02-02 23:20:29 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-02-02 23:20:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-02-02 23:20:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-02-02 23:20:29 +0000 UTC }] Feb 2 23:21:09.012: INFO: Feb 2 23:21:09.012: INFO: StatefulSet ss has not reached scale 0, at 2 Feb 2 23:21:10.018: INFO: POD NODE PHASE GRACE CONDITIONS Feb 2 23:21:10.018: INFO: ss-0 leguer-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-02-02 23:20:09 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-02-02 23:20:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-02-02 23:20:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-02-02 23:20:09 +0000 UTC }] Feb 2 23:21:10.018: INFO: ss-2 leguer-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-02-02 23:20:29 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-02-02 23:20:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-02-02 23:20:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-02-02 23:20:29 +0000 UTC }] Feb 2 23:21:10.018: INFO: Feb 2 23:21:10.018: INFO: StatefulSet ss has not reached scale 0, at 2 Feb 2 23:21:11.023: INFO: POD NODE PHASE GRACE CONDITIONS Feb 2 23:21:11.023: INFO: ss-0 leguer-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-02-02 23:20:09 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-02-02 23:20:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-02-02 23:20:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-02-02 23:20:09 +0000 UTC }] Feb 2 23:21:11.023: INFO: ss-2 leguer-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-02-02 23:20:29 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-02-02 23:20:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-02-02 23:20:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} 
{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-02-02 23:20:29 +0000 UTC }] Feb 2 23:21:11.023: INFO: Feb 2 23:21:11.023: INFO: StatefulSet ss has not reached scale 0, at 2 STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespacestatefulset-4876 Feb 2 23:21:12.029: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-4876 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Feb 2 23:21:12.164: INFO: rc: 1 Feb 2 23:21:12.165: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-4876 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: error: unable to upgrade connection: container not found ("webserver") error: exit status 1 Feb 2 23:21:22.165: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-4876 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Feb 2 23:21:22.488: INFO: rc: 1 Feb 2 23:21:22.488: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-4876 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: error: unable to upgrade connection: container not found ("webserver") error: exit status 1 Feb 2 23:21:32.488: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-4876 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Feb 2 23:21:35.974: INFO: rc: 1 Feb 2 23:21:35.974: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-4876 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: error: unable to upgrade connection: container not found ("webserver") error: exit status 1 Feb 2 23:21:45.975: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-4876 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Feb 2 23:21:46.111: INFO: rc: 1 Feb 2 23:21:46.111: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-4876 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: error: unable to upgrade connection: container not found ("webserver") error: exit status 1 Feb 2 23:21:56.111: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-4876 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Feb 2 23:21:56.250: INFO: rc: 1 Feb 2 23:21:56.250: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-4876 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command 
stdout: stderr: error: unable to upgrade connection: container not found ("webserver") error: exit status 1 Feb 2 23:22:06.251: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-4876 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Feb 2 23:22:06.387: INFO: rc: 1 Feb 2 23:22:06.387: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-4876 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: error: unable to upgrade connection: container not found ("webserver") error: exit status 1 Feb 2 23:22:16.387: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-4876 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Feb 2 23:22:16.487: INFO: rc: 1 Feb 2 23:22:16.487: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-4876 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 2 23:22:26.488: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-4876 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Feb 2 23:22:26.599: INFO: rc: 1 Feb 2 23:22:26.599: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-4876 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 2 23:22:36.599: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-4876 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Feb 2 23:22:36.703: INFO: rc: 1 Feb 2 23:22:36.703: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-4876 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 2 23:22:46.703: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-4876 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Feb 2 23:22:46.813: INFO: rc: 1 Feb 2 23:22:46.813: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-4876 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 2 23:22:56.814: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-4876 
exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Feb 2 23:22:56.918: INFO: rc: 1 Feb 2 23:22:56.918: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-4876 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 2 23:23:06.919: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-4876 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Feb 2 23:23:07.022: INFO: rc: 1 Feb 2 23:23:07.022: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-4876 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 2 23:23:17.022: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-4876 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Feb 2 23:23:17.126: INFO: rc: 1 Feb 2 23:23:17.126: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-4876 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 2 23:23:27.126: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-4876 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Feb 2 23:23:27.245: INFO: rc: 1 Feb 2 23:23:27.245: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-4876 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 2 23:23:37.246: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-4876 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Feb 2 23:23:37.348: INFO: rc: 1 Feb 2 23:23:37.348: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-4876 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 2 23:23:47.348: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-4876 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Feb 2 23:23:47.449: INFO: rc: 1 Feb 2 23:23:47.449: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34747 
--kubeconfig=/root/.kube/config --namespace=statefulset-4876 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 2 23:23:57.449: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-4876 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Feb 2 23:23:57.550: INFO: rc: 1 Feb 2 23:23:57.550: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-4876 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 2 23:24:07.550: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-4876 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Feb 2 23:24:07.653: INFO: rc: 1 Feb 2 23:24:07.653: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-4876 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 2 23:24:17.654: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-4876 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Feb 2 23:24:17.752: INFO: rc: 1 Feb 2 23:24:17.752: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-4876 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 2 23:24:27.752: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-4876 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Feb 2 23:24:27.877: INFO: rc: 1 Feb 2 23:24:27.877: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-4876 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 2 23:24:37.877: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-4876 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Feb 2 23:24:37.983: INFO: rc: 1 Feb 2 23:24:37.983: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-4876 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 2 23:24:47.983: INFO: Running 
'/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-4876 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Feb 2 23:24:48.088: INFO: rc: 1 Feb 2 23:24:48.089: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-4876 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 2 23:24:58.089: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-4876 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Feb 2 23:24:58.183: INFO: rc: 1 Feb 2 23:24:58.183: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-4876 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 2 23:25:08.183: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-4876 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Feb 2 23:25:08.297: INFO: rc: 1 Feb 2 23:25:08.297: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-4876 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 2 23:25:18.297: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-4876 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Feb 2 23:25:18.401: INFO: rc: 1 Feb 2 23:25:18.401: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-4876 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 2 23:25:28.402: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-4876 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Feb 2 23:25:28.502: INFO: rc: 1 Feb 2 23:25:28.502: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-4876 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 2 23:25:38.502: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-4876 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Feb 2 23:25:38.607: INFO: rc: 1 Feb 2 23:25:38.607: INFO: Waiting 10s to retry 
failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-4876 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 2 23:25:48.607: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-4876 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Feb 2 23:25:48.709: INFO: rc: 1 Feb 2 23:25:48.709: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-4876 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 2 23:25:58.709: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-4876 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Feb 2 23:25:58.817: INFO: rc: 1 Feb 2 23:25:58.817: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-4876 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 2 23:26:08.817: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-4876 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Feb 2 23:26:08.910: INFO: rc: 1 Feb 2 23:26:08.910: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-4876 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 2 23:26:18.910: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-4876 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Feb 2 23:26:19.006: INFO: rc: 1 Feb 2 23:26:19.006: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: Feb 2 23:26:19.006: INFO: Scaling statefulset ss to 0 Feb 2 23:26:19.025: INFO: Waiting for statefulset status.replicas updated to 0 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 Feb 2 23:26:19.055: INFO: Deleting all statefulset in ns statefulset-4876 Feb 2 23:26:19.058: INFO: Scaling statefulset ss to 0 Feb 2 23:26:19.068: INFO: Waiting for statefulset status.replicas updated to 0 Feb 2 23:26:19.070: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 2 23:26:19.087: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-4876" for this suite. 
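For reference, the scenario this spec exercises can be reproduced by hand with kubectl: break readiness on every replica by moving the served index.html aside (the same mv command shown in the log), then scale the StatefulSet to 0 while the pods are unhealthy and watch the burst (Parallel pod management) scale-down remove all ordinals together. A minimal sketch, reusing the StatefulSet ss and namespace statefulset-4876 from this run; this is not the e2e framework's own code.

# Break the readiness probe on each replica (same command the test runs above)
for i in 0 1 2; do
  kubectl -n statefulset-4876 exec ss-$i -- /bin/sh -c 'mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
done

# Pods go Ready=false but stay Running
kubectl -n statefulset-4876 get pods

# Scale to zero while unhealthy; with podManagementPolicy: Parallel the replicas
# are deleted together rather than one ordinal at a time
kubectl -n statefulset-4876 scale statefulset ss --replicas=0

# Wait for status.replicas to reach 0
kubectl -n statefulset-4876 get statefulset ss -w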
• [SLOW TEST:370.344 seconds] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":309,"completed":138,"skipped":2439,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 2 23:26:19.095: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] updates the published spec when one version gets renamed [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: set up a multi version CRD Feb 2 23:26:19.197: INFO: >>> kubeConfig: /root/.kube/config STEP: rename a version STEP: check the new version name is served STEP: check the old version name is removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 2 23:26:38.742: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-27" for this suite. 
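The rename check above can be approximated manually: publish a CRD that serves two versions, confirm both appear in the aggregated OpenAPI document, then rename one served version and confirm the published spec follows while the other version is untouched. A minimal sketch with a hypothetical stable.example.com/CronTab CRD; the test generates its own random CRD, not this one.

kubectl apply -f - <<'EOF'
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: crontabs.stable.example.com   # hypothetical CRD, not the test's
spec:
  group: stable.example.com
  scope: Namespaced
  names: {plural: crontabs, singular: crontab, kind: CronTab}
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        properties:
          spec:
            type: object
            properties:
              cronSpec: {type: string}
  - name: v2
    served: true
    storage: false
    schema:
      openAPIV3Schema:
        type: object
        properties:
          spec:
            type: object
            properties:
              cronSpec: {type: string}
EOF

# Both served versions should be published in the OpenAPI document
kubectl get --raw /openapi/v2 | grep -o 'com\.example\.stable\.v[0-9]*\.CronTab' | sort -u

# Rename v2 to v3 (edit spec.versions[1].name and re-apply), then re-run the grep:
# the v3 definition appears, v2 disappears, and v1 is unchanged.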
• [SLOW TEST:19.653 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 updates the published spec when one version gets renamed [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":309,"completed":139,"skipped":2470,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 2 23:26:38.748: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to start watching from a specific resource version [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: creating a new configmap STEP: modifying the configmap once STEP: modifying the configmap a second time STEP: deleting the configmap STEP: creating a watch on configmaps from the resource version returned by the first update STEP: Expecting to observe notifications for all changes to the configmap after the first update Feb 2 23:26:38.939: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-466 8165f7b1-9801-498a-bd9c-f26345b0d023 4181787 0 2021-02-02 23:26:38 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] [{e2e.test Update v1 2021-02-02 23:26:38 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Feb 2 23:26:38.939: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-466 8165f7b1-9801-498a-bd9c-f26345b0d023 4181788 0 2021-02-02 23:26:38 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] [{e2e.test Update v1 2021-02-02 23:26:38 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 2 23:26:38.939: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-466" for this suite. 
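The watch semantics asserted above can be exercised directly against the API server: record the resourceVersion returned by the first update, make a second update, delete the object, then open a watch starting at the recorded version and observe only the later MODIFIED and DELETED events. A minimal sketch against a hypothetical configmap in the default namespace, using kubectl proxy and curl rather than the test's Go client; resourceVersions eventually expire, so run the watch promptly.

# e2e-watch-demo is a placeholder name, not the test's object
kubectl create configmap e2e-watch-demo --from-literal=mutation=0
RV=$(kubectl patch configmap e2e-watch-demo -p '{"data":{"mutation":"1"}}' -o jsonpath='{.metadata.resourceVersion}')
kubectl patch configmap e2e-watch-demo -p '{"data":{"mutation":"2"}}'
kubectl delete configmap e2e-watch-demo

# Watch from the recorded resourceVersion: only the second MODIFIED and the DELETED
# events arrive, not the creation or the first update
kubectl proxy --port=8001 &
sleep 2
curl -sN "http://127.0.0.1:8001/api/v1/namespaces/default/configmaps?watch=true&resourceVersion=${RV}&fieldSelector=metadata.name%3De2e-watch-demo"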
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":309,"completed":140,"skipped":2490,"failed":0} SSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 2 23:26:38.946: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Feb 2 23:26:39.507: INFO: new replicaset for deployment "sample-webhook-deployment" is yet to be created Feb 2 23:26:41.518: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63747905199, loc:(*time.Location)(0x7962e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63747905199, loc:(*time.Location)(0x7962e20)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63747905199, loc:(*time.Location)(0x7962e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63747905199, loc:(*time.Location)(0x7962e20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6bd9446d55\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 2 23:26:43.522: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63747905199, loc:(*time.Location)(0x7962e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63747905199, loc:(*time.Location)(0x7962e20)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63747905199, loc:(*time.Location)(0x7962e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63747905199, loc:(*time.Location)(0x7962e20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6bd9446d55\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Feb 2 23:26:46.593: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing validating webhooks should work [Conformance] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 2 23:26:47.212: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-8305" for this suite. STEP: Destroying namespace "webhook-8305-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 • [SLOW TEST:8.410 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing validating webhooks should work [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":309,"completed":141,"skipped":2501,"failed":0} SS ------------------------------ [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [k8s.io] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 2 23:26:47.356: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [AfterEach] [k8s.io] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 2 23:26:51.521: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-9494" for this suite. 
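The /etc/hosts behaviour verified above is driven entirely by the pod spec: entries under spec.hostAliases are written by the kubelet into the container's managed /etc/hosts. A minimal sketch with a hypothetical busybox pod; names and IPs are illustrative, not the test's generated pod.

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: hostaliases-demo   # hypothetical pod name
spec:
  restartPolicy: Never
  hostAliases:
  - ip: "127.0.0.1"
    hostnames: ["foo.local", "bar.local"]
  containers:
  - name: main
    image: busybox
    command: ["sleep", "3600"]
EOF

# The kubelet-managed hosts file now ends with the 127.0.0.1 foo.local bar.local entry
kubectl exec hostaliases-demo -- cat /etc/hosts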
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":309,"completed":142,"skipped":2503,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 2 23:26:51.530: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247 [It] should add annotations for pods in rc [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: creating Agnhost RC Feb 2 23:26:51.637: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-6319 create -f -' Feb 2 23:26:52.038: INFO: stderr: "" Feb 2 23:26:52.038: INFO: stdout: "replicationcontroller/agnhost-primary created\n" STEP: Waiting for Agnhost primary to start. Feb 2 23:26:53.044: INFO: Selector matched 1 pods for map[app:agnhost] Feb 2 23:26:53.044: INFO: Found 0 / 1 Feb 2 23:26:54.129: INFO: Selector matched 1 pods for map[app:agnhost] Feb 2 23:26:54.129: INFO: Found 0 / 1 Feb 2 23:26:55.043: INFO: Selector matched 1 pods for map[app:agnhost] Feb 2 23:26:55.043: INFO: Found 1 / 1 Feb 2 23:26:55.043: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 STEP: patching all pods Feb 2 23:26:55.046: INFO: Selector matched 1 pods for map[app:agnhost] Feb 2 23:26:55.046: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Feb 2 23:26:55.046: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-6319 patch pod agnhost-primary-cr8zd -p {"metadata":{"annotations":{"x":"y"}}}' Feb 2 23:26:55.149: INFO: stderr: "" Feb 2 23:26:55.149: INFO: stdout: "pod/agnhost-primary-cr8zd patched\n" STEP: checking annotations Feb 2 23:26:55.166: INFO: Selector matched 1 pods for map[app:agnhost] Feb 2 23:26:55.166: INFO: ForEach: Found 1 pods from the filter. Now looping through them. [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 2 23:26:55.166: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6319" for this suite. 
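The patch above is a plain strategic-merge patch on pod metadata; the same flow works by hand against any pod managed by a replication controller. A sketch reusing the app=agnhost selector and namespace from the log; <pod-name> is a placeholder for whichever pod the RC created.

# Find the RC's pod, add the annotation, and read it back
kubectl -n kubectl-6319 get pods -l app=agnhost
kubectl -n kubectl-6319 patch pod <pod-name> -p '{"metadata":{"annotations":{"x":"y"}}}'
kubectl -n kubectl-6319 get pod <pod-name> -o jsonpath='{.metadata.annotations.x}'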
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance]","total":309,"completed":143,"skipped":2516,"failed":0} SSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 2 23:26:55.174: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:129 [It] should run and stop simple daemon [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. Feb 2 23:26:55.308: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 2 23:26:55.335: INFO: Number of nodes with available pods: 0 Feb 2 23:26:55.335: INFO: Node leguer-worker is running more than one daemon pod Feb 2 23:26:56.386: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 2 23:26:56.389: INFO: Number of nodes with available pods: 0 Feb 2 23:26:56.389: INFO: Node leguer-worker is running more than one daemon pod Feb 2 23:26:57.341: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 2 23:26:57.464: INFO: Number of nodes with available pods: 0 Feb 2 23:26:57.464: INFO: Node leguer-worker is running more than one daemon pod Feb 2 23:26:58.397: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 2 23:26:58.400: INFO: Number of nodes with available pods: 0 Feb 2 23:26:58.400: INFO: Node leguer-worker is running more than one daemon pod Feb 2 23:26:59.342: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 2 23:26:59.346: INFO: Number of nodes with available pods: 1 Feb 2 23:26:59.346: INFO: Node leguer-worker is running more than one daemon pod Feb 2 23:27:00.340: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 2 23:27:00.368: INFO: Number of nodes with available pods: 2 Feb 2 23:27:00.368: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Stop a daemon pod, check that the daemon pod is revived. 
Feb 2 23:27:00.429: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 2 23:27:00.479: INFO: Number of nodes with available pods: 1 Feb 2 23:27:00.479: INFO: Node leguer-worker2 is running more than one daemon pod Feb 2 23:27:01.485: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 2 23:27:01.489: INFO: Number of nodes with available pods: 1 Feb 2 23:27:01.489: INFO: Node leguer-worker2 is running more than one daemon pod Feb 2 23:27:02.484: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 2 23:27:02.487: INFO: Number of nodes with available pods: 1 Feb 2 23:27:02.487: INFO: Node leguer-worker2 is running more than one daemon pod Feb 2 23:27:03.483: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 2 23:27:03.486: INFO: Number of nodes with available pods: 1 Feb 2 23:27:03.486: INFO: Node leguer-worker2 is running more than one daemon pod Feb 2 23:27:04.484: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 2 23:27:04.488: INFO: Number of nodes with available pods: 1 Feb 2 23:27:04.488: INFO: Node leguer-worker2 is running more than one daemon pod Feb 2 23:27:05.486: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 2 23:27:05.494: INFO: Number of nodes with available pods: 1 Feb 2 23:27:05.494: INFO: Node leguer-worker2 is running more than one daemon pod Feb 2 23:27:06.523: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 2 23:27:06.527: INFO: Number of nodes with available pods: 1 Feb 2 23:27:06.527: INFO: Node leguer-worker2 is running more than one daemon pod Feb 2 23:27:07.485: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 2 23:27:07.490: INFO: Number of nodes with available pods: 1 Feb 2 23:27:07.490: INFO: Node leguer-worker2 is running more than one daemon pod Feb 2 23:27:08.485: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 2 23:27:08.489: INFO: Number of nodes with available pods: 1 Feb 2 23:27:08.489: INFO: Node leguer-worker2 is running more than one daemon pod Feb 2 23:27:09.486: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 2 23:27:09.490: INFO: Number of nodes with available pods: 1 Feb 2 23:27:09.490: INFO: Node leguer-worker2 is running more than one daemon pod Feb 2 23:27:10.484: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: 
Effect:NoSchedule TimeAdded:}], skip checking this node Feb 2 23:27:10.487: INFO: Number of nodes with available pods: 1 Feb 2 23:27:10.487: INFO: Node leguer-worker2 is running more than one daemon pod Feb 2 23:27:11.485: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 2 23:27:11.489: INFO: Number of nodes with available pods: 1 Feb 2 23:27:11.489: INFO: Node leguer-worker2 is running more than one daemon pod Feb 2 23:27:12.485: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 2 23:27:12.490: INFO: Number of nodes with available pods: 1 Feb 2 23:27:12.490: INFO: Node leguer-worker2 is running more than one daemon pod Feb 2 23:27:13.485: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 2 23:27:13.489: INFO: Number of nodes with available pods: 2 Feb 2 23:27:13.489: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:95 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-8638, will wait for the garbage collector to delete the pods Feb 2 23:27:13.552: INFO: Deleting DaemonSet.extensions daemon-set took: 7.612373ms Feb 2 23:27:13.653: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.327341ms Feb 2 23:27:20.157: INFO: Number of nodes with available pods: 0 Feb 2 23:27:20.157: INFO: Number of running nodes: 0, number of available pods: 0 Feb 2 23:27:20.160: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"4182080"},"items":null} Feb 2 23:27:20.186: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"4182080"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 2 23:27:20.198: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-8638" for this suite. 
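The run-and-revive behaviour above is the DaemonSet controller doing its job: one pod per eligible node, and any deleted daemon pod is recreated. The control-plane node is skipped unless the pod template tolerates the node-role.kubernetes.io/master taint, which is exactly what the "skip checking this node" lines record. A minimal sketch with a hypothetical httpd DaemonSet, not the test's generated one.

kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set-demo   # hypothetical name
spec:
  selector:
    matchLabels: {app: daemon-set-demo}
  template:
    metadata:
      labels: {app: daemon-set-demo}
    spec:
      containers:
      - name: app
        image: httpd
EOF

# One pod per schedulable worker node
kubectl get pods -l app=daemon-set-demo -o wide

# Delete one daemon pod and watch the controller bring it back
kubectl delete pod <daemon-pod-name>
kubectl get pods -l app=daemon-set-demo -w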
• [SLOW TEST:25.033 seconds] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop simple daemon [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":309,"completed":144,"skipped":2524,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [k8s.io] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 2 23:27:20.207: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should run with the expected status [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpa': should get the expected 'State' STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpof': should get the expected 'State' STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpn': should get the expected 'State' STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance] [AfterEach] [k8s.io] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 2 23:27:54.867: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-4780" for this suite. 
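The RestartCount / Phase / State expectations above follow from the container's exit code and the pod restartPolicy. A minimal sketch of one of the three variants the spec covers: a container that exits non-zero under restartPolicy OnFailure, with placeholder names rather than the test's generated terminate-cmd-* containers.

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: terminate-demo   # hypothetical name
spec:
  restartPolicy: OnFailure
  containers:
  - name: main
    image: busybox
    command: ["sh", "-c", "exit 1"]
EOF

# Under OnFailure the kubelet restarts the container with backoff, so restartCount climbs
# and lastState.terminated records the non-zero exit code once a restart has happened
kubectl get pod terminate-demo -o jsonpath='{.status.phase}{"\n"}'
kubectl get pod terminate-demo -o jsonpath='{.status.containerStatuses[0].restartCount}{"\n"}'
kubectl get pod terminate-demo -o jsonpath='{.status.containerStatuses[0].lastState.terminated.exitCode}{"\n"}'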
• [SLOW TEST:34.667 seconds] [k8s.io] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 blackbox test /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:41 when starting a container that exits /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:42 should run with the expected status [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":309,"completed":145,"skipped":2547,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 2 23:27:54.875: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:92 Feb 2 23:27:54.965: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Feb 2 23:27:55.007: INFO: Waiting for terminating namespaces to be deleted... 
Feb 2 23:27:55.010: INFO: Logging pods the apiserver thinks is on node leguer-worker before test Feb 2 23:27:55.018: INFO: rally-0a12c122-7dnmol6z-vwbwf from c-rally-0a12c122-vmv18pta started at 2021-01-28 03:37:38 +0000 UTC (1 container statuses recorded) Feb 2 23:27:55.018: INFO: Container rally-0a12c122-7dnmol6z ready: true, restart count 0 Feb 2 23:27:55.018: INFO: rally-0a12c122-fagfvvpw-sskvj from c-rally-0a12c122-vmv18pta started at 2021-01-28 03:37:54 +0000 UTC (1 container statuses recorded) Feb 2 23:27:55.018: INFO: Container rally-0a12c122-fagfvvpw ready: true, restart count 0 Feb 2 23:27:55.018: INFO: rally-0a12c122-iqj2mcat-2hfpj from c-rally-0a12c122-vmv18pta started at 2021-01-28 03:37:16 +0000 UTC (1 container statuses recorded) Feb 2 23:27:55.018: INFO: Container rally-0a12c122-iqj2mcat ready: true, restart count 0 Feb 2 23:27:55.018: INFO: rally-0a12c122-iqj2mcat-swp7f from c-rally-0a12c122-vmv18pta started at 2021-01-28 03:37:16 +0000 UTC (1 container statuses recorded) Feb 2 23:27:55.018: INFO: Container rally-0a12c122-iqj2mcat ready: true, restart count 0 Feb 2 23:27:55.018: INFO: rally-a8f48c6d-3kmika18-pdtzv from c-rally-a8f48c6d-bmjklszo started at 2021-01-10 20:04:23 +0000 UTC (1 container statuses recorded) Feb 2 23:27:55.018: INFO: Container rally-a8f48c6d-3kmika18 ready: true, restart count 0 Feb 2 23:27:55.018: INFO: rally-a8f48c6d-3kmika18-pllzg from c-rally-a8f48c6d-bmjklszo started at 2021-01-10 20:04:23 +0000 UTC (1 container statuses recorded) Feb 2 23:27:55.018: INFO: Container rally-a8f48c6d-3kmika18 ready: true, restart count 0 Feb 2 23:27:55.018: INFO: rally-a8f48c6d-4cyi45kq-j5tzz from c-rally-a8f48c6d-bmjklszo started at 2021-01-10 20:04:23 +0000 UTC (1 container statuses recorded) Feb 2 23:27:55.018: INFO: Container rally-a8f48c6d-4cyi45kq ready: true, restart count 0 Feb 2 23:27:55.018: INFO: rally-a8f48c6d-f3hls6a3-57dwc from c-rally-a8f48c6d-bmjklszo started at 2021-01-10 20:04:32 +0000 UTC (1 container statuses recorded) Feb 2 23:27:55.018: INFO: Container rally-a8f48c6d-f3hls6a3 ready: true, restart count 0 Feb 2 23:27:55.018: INFO: rally-a8f48c6d-1y3amfc0-lp8st from c-rally-a8f48c6d-lypjtwol started at 2021-01-10 20:04:32 +0000 UTC (1 container statuses recorded) Feb 2 23:27:55.018: INFO: Container rally-a8f48c6d-1y3amfc0 ready: true, restart count 0 Feb 2 23:27:55.018: INFO: rally-a8f48c6d-9pqmjehi-9zwjj from c-rally-a8f48c6d-lypjtwol started at 2021-01-10 20:04:23 +0000 UTC (1 container statuses recorded) Feb 2 23:27:55.018: INFO: Container rally-a8f48c6d-9pqmjehi ready: true, restart count 0 Feb 2 23:27:55.018: INFO: chaos-controller-manager-69c479c674-s796v from default started at 2021-01-10 20:58:24 +0000 UTC (1 container statuses recorded) Feb 2 23:27:55.018: INFO: Container chaos-mesh ready: true, restart count 0 Feb 2 23:27:55.018: INFO: chaos-daemon-lv692 from default started at 2021-01-10 20:58:25 +0000 UTC (1 container statuses recorded) Feb 2 23:27:55.018: INFO: Container chaos-daemon ready: true, restart count 0 Feb 2 23:27:55.018: INFO: kindnet-psm25 from kube-system started at 2021-01-10 17:38:10 +0000 UTC (1 container statuses recorded) Feb 2 23:27:55.018: INFO: Container kindnet-cni ready: true, restart count 0 Feb 2 23:27:55.018: INFO: kube-proxy-bmbcs from kube-system started at 2021-01-10 17:38:10 +0000 UTC (1 container statuses recorded) Feb 2 23:27:55.018: INFO: Container kube-proxy ready: true, restart count 0 Feb 2 23:27:55.018: INFO: Logging pods the apiserver thinks is on node leguer-worker2 before test Feb 2 
23:27:55.025: INFO: rally-0a12c122-4xacdhsf-44v5r from c-rally-0a12c122-vmv18pta started at 2021-01-28 03:37:16 +0000 UTC (1 container statuses recorded) Feb 2 23:27:55.025: INFO: Container rally-0a12c122-4xacdhsf ready: true, restart count 0 Feb 2 23:27:55.025: INFO: rally-0a12c122-4xacdhsf-5c974 from c-rally-0a12c122-vmv18pta started at 2021-01-28 03:37:16 +0000 UTC (1 container statuses recorded) Feb 2 23:27:55.025: INFO: Container rally-0a12c122-4xacdhsf ready: true, restart count 0 Feb 2 23:27:55.025: INFO: rally-0a12c122-7dnmol6z-n9ztn from c-rally-0a12c122-vmv18pta started at 2021-01-28 03:37:38 +0000 UTC (1 container statuses recorded) Feb 2 23:27:55.025: INFO: Container rally-0a12c122-7dnmol6z ready: true, restart count 0 Feb 2 23:27:55.025: INFO: rally-0a12c122-fagfvvpw-cxsgt from c-rally-0a12c122-vmv18pta started at 2021-01-28 03:37:53 +0000 UTC (1 container statuses recorded) Feb 2 23:27:55.025: INFO: Container rally-0a12c122-fagfvvpw ready: true, restart count 0 Feb 2 23:27:55.025: INFO: rally-0a12c122-lqiac6cu-6fsz6 from c-rally-0a12c122-vmv18pta started at 2021-01-28 03:37:16 +0000 UTC (1 container statuses recorded) Feb 2 23:27:55.025: INFO: Container rally-0a12c122-lqiac6cu ready: true, restart count 0 Feb 2 23:27:55.025: INFO: rally-0a12c122-lqiac6cu-99jsp from c-rally-0a12c122-vmv18pta started at 2021-01-28 03:37:16 +0000 UTC (1 container statuses recorded) Feb 2 23:27:55.025: INFO: Container rally-0a12c122-lqiac6cu ready: true, restart count 0 Feb 2 23:27:55.025: INFO: rally-a8f48c6d-4cyi45kq-knr4r from c-rally-a8f48c6d-bmjklszo started at 2021-01-10 20:04:23 +0000 UTC (1 container statuses recorded) Feb 2 23:27:55.025: INFO: Container rally-a8f48c6d-4cyi45kq ready: true, restart count 0 Feb 2 23:27:55.025: INFO: rally-a8f48c6d-f3hls6a3-dwt8n from c-rally-a8f48c6d-bmjklszo started at 2021-01-10 20:04:32 +0000 UTC (1 container statuses recorded) Feb 2 23:27:55.025: INFO: Container rally-a8f48c6d-f3hls6a3 ready: true, restart count 0 Feb 2 23:27:55.025: INFO: rally-a8f48c6d-1y3amfc0-hh9qk from c-rally-a8f48c6d-lypjtwol started at 2021-01-10 20:04:32 +0000 UTC (1 container statuses recorded) Feb 2 23:27:55.025: INFO: Container rally-a8f48c6d-1y3amfc0 ready: true, restart count 0 Feb 2 23:27:55.025: INFO: rally-a8f48c6d-9pqmjehi-85slb from c-rally-a8f48c6d-lypjtwol started at 2021-01-10 20:04:23 +0000 UTC (1 container statuses recorded) Feb 2 23:27:55.025: INFO: Container rally-a8f48c6d-9pqmjehi ready: true, restart count 0 Feb 2 23:27:55.025: INFO: rally-a8f48c6d-vnukxqu0-llj24 from c-rally-a8f48c6d-lypjtwol started at 2021-01-10 20:04:23 +0000 UTC (1 container statuses recorded) Feb 2 23:27:55.025: INFO: Container rally-a8f48c6d-vnukxqu0 ready: true, restart count 0 Feb 2 23:27:55.025: INFO: rally-a8f48c6d-vnukxqu0-v85kr from c-rally-a8f48c6d-lypjtwol started at 2021-01-10 20:04:23 +0000 UTC (1 container statuses recorded) Feb 2 23:27:55.025: INFO: Container rally-a8f48c6d-vnukxqu0 ready: true, restart count 0 Feb 2 23:27:55.025: INFO: chaos-daemon-ffkg7 from default started at 2021-01-10 20:58:25 +0000 UTC (1 container statuses recorded) Feb 2 23:27:55.025: INFO: Container chaos-daemon ready: true, restart count 0 Feb 2 23:27:55.025: INFO: kindnet-8wggd from kube-system started at 2021-01-10 17:38:10 +0000 UTC (1 container statuses recorded) Feb 2 23:27:55.025: INFO: Container kindnet-cni ready: true, restart count 0 Feb 2 23:27:55.025: INFO: kube-proxy-29gxg from kube-system started at 2021-01-10 17:38:09 +0000 UTC (1 container statuses recorded) Feb 2 23:27:55.025: 
INFO: Container kube-proxy ready: true, restart count 0 Feb 2 23:27:55.025: INFO: busybox-host-aliasesb377700c-5354-4c13-bc91-bc656bdf4129 from kubelet-test-9494 started at 2021-02-02 23:26:47 +0000 UTC (1 container statuses recorded) Feb 2 23:27:55.025: INFO: Container busybox-host-aliasesb377700c-5354-4c13-bc91-bc656bdf4129 ready: false, restart count 0 [It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-3f6d6be4-d2fc-45b7-903f-ea4c40737af9 95 STEP: Trying to create a pod(pod4) with hostport 54322 and hostIP 0.0.0.0(empty string here) and expect scheduled STEP: Trying to create another pod(pod5) with hostport 54322 but hostIP 172.18.0.13 on the node which pod4 resides and expect not scheduled STEP: removing the label kubernetes.io/e2e-3f6d6be4-d2fc-45b7-903f-ea4c40737af9 off the node leguer-worker STEP: verifying the node doesn't have the label kubernetes.io/e2e-3f6d6be4-d2fc-45b7-903f-ea4c40737af9 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 2 23:33:03.288: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-5583" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:83 • [SLOW TEST:308.459 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","total":309,"completed":146,"skipped":2560,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 2 23:33:03.334: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating configMap configmap-2653/configmap-test-b2b670fa-0afd-4417-acbe-95a66c59ccde STEP: Creating a pod to test consume configMaps Feb 2 23:33:03.486: INFO: Waiting up to 5m0s 
for pod "pod-configmaps-1d5155e7-96dd-45a9-9f23-5ef030504512" in namespace "configmap-2653" to be "Succeeded or Failed" Feb 2 23:33:03.502: INFO: Pod "pod-configmaps-1d5155e7-96dd-45a9-9f23-5ef030504512": Phase="Pending", Reason="", readiness=false. Elapsed: 15.836649ms Feb 2 23:33:05.507: INFO: Pod "pod-configmaps-1d5155e7-96dd-45a9-9f23-5ef030504512": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020962344s Feb 2 23:33:07.511: INFO: Pod "pod-configmaps-1d5155e7-96dd-45a9-9f23-5ef030504512": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.02514204s STEP: Saw pod success Feb 2 23:33:07.512: INFO: Pod "pod-configmaps-1d5155e7-96dd-45a9-9f23-5ef030504512" satisfied condition "Succeeded or Failed" Feb 2 23:33:07.514: INFO: Trying to get logs from node leguer-worker pod pod-configmaps-1d5155e7-96dd-45a9-9f23-5ef030504512 container env-test: STEP: delete the pod Feb 2 23:33:07.570: INFO: Waiting for pod pod-configmaps-1d5155e7-96dd-45a9-9f23-5ef030504512 to disappear Feb 2 23:33:07.597: INFO: Pod pod-configmaps-1d5155e7-96dd-45a9-9f23-5ef030504512 no longer exists [AfterEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 2 23:33:07.597: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-2653" for this suite. •{"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":309,"completed":147,"skipped":2577,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 2 23:33:07.607: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating configMap with name configmap-test-volume-map-29fcd378-1767-406d-a6b3-29372cde0188 STEP: Creating a pod to test consume configMaps Feb 2 23:33:07.733: INFO: Waiting up to 5m0s for pod "pod-configmaps-2042611f-4407-48a3-88ef-51afe08f5d06" in namespace "configmap-9748" to be "Succeeded or Failed" Feb 2 23:33:07.739: INFO: Pod "pod-configmaps-2042611f-4407-48a3-88ef-51afe08f5d06": Phase="Pending", Reason="", readiness=false. Elapsed: 6.223804ms Feb 2 23:33:09.759: INFO: Pod "pod-configmaps-2042611f-4407-48a3-88ef-51afe08f5d06": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026124254s Feb 2 23:33:11.763: INFO: Pod "pod-configmaps-2042611f-4407-48a3-88ef-51afe08f5d06": Phase="Pending", Reason="", readiness=false. Elapsed: 4.029948922s Feb 2 23:33:13.766: INFO: Pod "pod-configmaps-2042611f-4407-48a3-88ef-51afe08f5d06": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.033784875s STEP: Saw pod success Feb 2 23:33:13.766: INFO: Pod "pod-configmaps-2042611f-4407-48a3-88ef-51afe08f5d06" satisfied condition "Succeeded or Failed" Feb 2 23:33:13.770: INFO: Trying to get logs from node leguer-worker pod pod-configmaps-2042611f-4407-48a3-88ef-51afe08f5d06 container agnhost-container: STEP: delete the pod Feb 2 23:33:13.835: INFO: Waiting for pod pod-configmaps-2042611f-4407-48a3-88ef-51afe08f5d06 to disappear Feb 2 23:33:13.837: INFO: Pod pod-configmaps-2042611f-4407-48a3-88ef-51afe08f5d06 no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 2 23:33:13.837: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-9748" for this suite. • [SLOW TEST:6.260 seconds] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":309,"completed":148,"skipped":2657,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 2 23:33:13.868: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247 [It] should check if v1 is in available api versions [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: validating api versions Feb 2 23:33:13.968: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-3995 api-versions' Feb 2 23:33:14.191: INFO: stderr: "" Feb 2 23:33:14.191: INFO: stdout: 
"admissionregistration.k8s.io/v1\nadmissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\ndiscovery.k8s.io/v1beta1\nevents.k8s.io/v1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nflowcontrol.apiserver.k8s.io/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1\nnode.k8s.io/v1beta1\npingcap.com/v1alpha1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 2 23:33:14.192: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3995" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance]","total":309,"completed":149,"skipped":2685,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment should run the lifecycle of a Deployment [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 2 23:33:14.202: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:85 [It] should run the lifecycle of a Deployment [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: creating a Deployment STEP: waiting for Deployment to be created STEP: waiting for all Replicas to be Ready Feb 2 23:33:14.307: INFO: observed Deployment test-deployment in namespace deployment-5478 with ReadyReplicas 0 and labels map[test-deployment-static:true] Feb 2 23:33:14.307: INFO: observed Deployment test-deployment in namespace deployment-5478 with ReadyReplicas 0 and labels map[test-deployment-static:true] Feb 2 23:33:14.330: INFO: observed Deployment test-deployment in namespace deployment-5478 with ReadyReplicas 0 and labels map[test-deployment-static:true] Feb 2 23:33:14.330: INFO: observed Deployment test-deployment in namespace deployment-5478 with ReadyReplicas 0 and labels map[test-deployment-static:true] Feb 2 23:33:14.372: INFO: observed Deployment test-deployment in namespace deployment-5478 with ReadyReplicas 0 and labels map[test-deployment-static:true] Feb 2 23:33:14.372: INFO: observed Deployment test-deployment in namespace deployment-5478 with ReadyReplicas 0 and labels map[test-deployment-static:true] Feb 2 23:33:14.450: INFO: observed Deployment test-deployment in namespace deployment-5478 with ReadyReplicas 0 and labels map[test-deployment-static:true] Feb 2 23:33:14.450: INFO: observed 
Deployment test-deployment in namespace deployment-5478 with ReadyReplicas 0 and labels map[test-deployment-static:true] Feb 2 23:33:18.020: INFO: observed Deployment test-deployment in namespace deployment-5478 with ReadyReplicas 1 and labels map[test-deployment-static:true] Feb 2 23:33:18.020: INFO: observed Deployment test-deployment in namespace deployment-5478 with ReadyReplicas 1 and labels map[test-deployment-static:true] Feb 2 23:33:18.925: INFO: observed Deployment test-deployment in namespace deployment-5478 with ReadyReplicas 2 and labels map[test-deployment-static:true] STEP: patching the Deployment Feb 2 23:33:18.977: INFO: observed event type ADDED STEP: waiting for Replicas to scale Feb 2 23:33:18.980: INFO: observed Deployment test-deployment in namespace deployment-5478 with ReadyReplicas 0 Feb 2 23:33:18.980: INFO: observed Deployment test-deployment in namespace deployment-5478 with ReadyReplicas 0 Feb 2 23:33:18.980: INFO: observed Deployment test-deployment in namespace deployment-5478 with ReadyReplicas 0 Feb 2 23:33:18.980: INFO: observed Deployment test-deployment in namespace deployment-5478 with ReadyReplicas 0 Feb 2 23:33:18.980: INFO: observed Deployment test-deployment in namespace deployment-5478 with ReadyReplicas 0 Feb 2 23:33:18.980: INFO: observed Deployment test-deployment in namespace deployment-5478 with ReadyReplicas 0 Feb 2 23:33:18.980: INFO: observed Deployment test-deployment in namespace deployment-5478 with ReadyReplicas 0 Feb 2 23:33:18.980: INFO: observed Deployment test-deployment in namespace deployment-5478 with ReadyReplicas 0 Feb 2 23:33:18.981: INFO: observed Deployment test-deployment in namespace deployment-5478 with ReadyReplicas 1 Feb 2 23:33:18.981: INFO: observed Deployment test-deployment in namespace deployment-5478 with ReadyReplicas 1 Feb 2 23:33:18.981: INFO: observed Deployment test-deployment in namespace deployment-5478 with ReadyReplicas 2 Feb 2 23:33:18.981: INFO: observed Deployment test-deployment in namespace deployment-5478 with ReadyReplicas 2 Feb 2 23:33:18.981: INFO: observed Deployment test-deployment in namespace deployment-5478 with ReadyReplicas 2 Feb 2 23:33:18.981: INFO: observed Deployment test-deployment in namespace deployment-5478 with ReadyReplicas 2 Feb 2 23:33:18.997: INFO: observed Deployment test-deployment in namespace deployment-5478 with ReadyReplicas 2 Feb 2 23:33:18.997: INFO: observed Deployment test-deployment in namespace deployment-5478 with ReadyReplicas 2 Feb 2 23:33:19.029: INFO: observed Deployment test-deployment in namespace deployment-5478 with ReadyReplicas 2 Feb 2 23:33:19.029: INFO: observed Deployment test-deployment in namespace deployment-5478 with ReadyReplicas 2 Feb 2 23:33:19.160: INFO: observed Deployment test-deployment in namespace deployment-5478 with ReadyReplicas 2 Feb 2 23:33:19.160: INFO: observed Deployment test-deployment in namespace deployment-5478 with ReadyReplicas 2 Feb 2 23:33:19.292: INFO: observed Deployment test-deployment in namespace deployment-5478 with ReadyReplicas 1 STEP: listing Deployments Feb 2 23:33:19.629: INFO: Found test-deployment with labels: map[test-deployment:patched test-deployment-static:true] STEP: updating the Deployment Feb 2 23:33:19.652: INFO: observed Deployment test-deployment in namespace deployment-5478 with ReadyReplicas 1 STEP: fetching the DeploymentStatus Feb 2 23:33:19.789: INFO: observed Deployment test-deployment in namespace deployment-5478 with ReadyReplicas 1 and labels map[test-deployment:updated 
test-deployment-static:true] Feb 2 23:33:19.833: INFO: observed Deployment test-deployment in namespace deployment-5478 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true] Feb 2 23:33:19.909: INFO: observed Deployment test-deployment in namespace deployment-5478 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true] Feb 2 23:33:20.368: INFO: observed Deployment test-deployment in namespace deployment-5478 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true] Feb 2 23:33:20.721: INFO: observed Deployment test-deployment in namespace deployment-5478 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true] Feb 2 23:33:20.895: INFO: observed Deployment test-deployment in namespace deployment-5478 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true] Feb 2 23:33:21.103: INFO: observed Deployment test-deployment in namespace deployment-5478 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true] Feb 2 23:33:21.313: INFO: observed Deployment test-deployment in namespace deployment-5478 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true] STEP: patching the DeploymentStatus STEP: fetching the DeploymentStatus Feb 2 23:33:25.614: INFO: observed Deployment test-deployment in namespace deployment-5478 with ReadyReplicas 1 Feb 2 23:33:25.614: INFO: observed Deployment test-deployment in namespace deployment-5478 with ReadyReplicas 1 Feb 2 23:33:25.614: INFO: observed Deployment test-deployment in namespace deployment-5478 with ReadyReplicas 1 Feb 2 23:33:25.614: INFO: observed Deployment test-deployment in namespace deployment-5478 with ReadyReplicas 1 Feb 2 23:33:25.615: INFO: observed Deployment test-deployment in namespace deployment-5478 with ReadyReplicas 1 Feb 2 23:33:25.615: INFO: observed Deployment test-deployment in namespace deployment-5478 with ReadyReplicas 1 Feb 2 23:33:25.615: INFO: observed Deployment test-deployment in namespace deployment-5478 with ReadyReplicas 1 Feb 2 23:33:25.615: INFO: observed Deployment test-deployment in namespace deployment-5478 with ReadyReplicas 1 STEP: deleting the Deployment Feb 2 23:33:26.226: INFO: observed event type MODIFIED Feb 2 23:33:26.227: INFO: observed event type MODIFIED Feb 2 23:33:26.227: INFO: observed event type MODIFIED Feb 2 23:33:26.227: INFO: observed event type MODIFIED Feb 2 23:33:26.227: INFO: observed event type MODIFIED Feb 2 23:33:26.227: INFO: observed event type MODIFIED Feb 2 23:33:26.227: INFO: observed event type MODIFIED Feb 2 23:33:26.227: INFO: observed event type MODIFIED Feb 2 23:33:26.227: INFO: observed event type MODIFIED Feb 2 23:33:26.227: INFO: observed event type MODIFIED Feb 2 23:33:26.227: INFO: observed event type MODIFIED Feb 2 23:33:26.227: INFO: observed event type MODIFIED Feb 2 23:33:26.228: INFO: observed event type MODIFIED [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:79 Feb 2 23:33:26.276: INFO: Log out all the ReplicaSets if there is no deployment created Feb 2 23:33:26.413: INFO: ReplicaSet "test-deployment-768947d6f5": &ReplicaSet{ObjectMeta:{test-deployment-768947d6f5 deployment-5478 fb5bd986-1c9d-44ec-b3a7-72a3b0dfa8eb 4183129 3 2021-02-02 23:33:19 +0000 UTC map[pod-template-hash:768947d6f5 test-deployment-static:true] map[deployment.kubernetes.io/desired-replicas:2 
deployment.kubernetes.io/max-replicas:3 deployment.kubernetes.io/revision:3] [{apps/v1 Deployment test-deployment c0edcaba-0ae1-475d-b05b-f858b6d75c39 0xc000955c37 0xc000955c38}] [] [{kube-controller-manager Update apps/v1 2021-02-02 23:33:24 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c0edcaba-0ae1-475d-b05b-f858b6d75c39\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"test-deployment\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*2,Selector:&v1.LabelSelector{MatchLabels:map[string]string{pod-template-hash: 768947d6f5,test-deployment-static: true,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[pod-template-hash:768947d6f5 test-deployment-static:true] map[] [] [] []} {[] [] [{test-deployment docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc000955ce0 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:2,FullyLabeledReplicas:2,ObservedGeneration:3,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Feb 2 23:33:26.417: INFO: pod: "test-deployment-768947d6f5-c4g8m": &Pod{ObjectMeta:{test-deployment-768947d6f5-c4g8m test-deployment-768947d6f5- deployment-5478 28915996-cb18-45f5-9ab7-8436ce5808b3 4183136 0 2021-02-02 23:33:25 +0000 UTC map[pod-template-hash:768947d6f5 test-deployment-static:true] map[] [{apps/v1 ReplicaSet test-deployment-768947d6f5 fb5bd986-1c9d-44ec-b3a7-72a3b0dfa8eb 0xc00757ad47 0xc00757ad48}] [] [{kube-controller-manager Update v1 2021-02-02 23:33:25 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fb5bd986-1c9d-44ec-b3a7-72a3b0dfa8eb\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"test-deployment\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-02-02 23:33:25 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-mn67x,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-mn67x,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:test-deployment,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-mn67x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-02 23:33:25 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-02 23:33:25 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[test-deployment],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-02 23:33:25 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [test-deployment],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-02 23:33:25 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:,StartTime:2021-02-02 23:33:25 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:test-deployment,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Feb 2 23:33:26.417: INFO: pod: "test-deployment-768947d6f5-d2lnl": &Pod{ObjectMeta:{test-deployment-768947d6f5-d2lnl test-deployment-768947d6f5- deployment-5478 5f9fac5e-ca9e-4bfb-9ad1-c4b24a05a517 4183108 0 2021-02-02 23:33:20 +0000 UTC map[pod-template-hash:768947d6f5 test-deployment-static:true] map[] [{apps/v1 ReplicaSet test-deployment-768947d6f5 fb5bd986-1c9d-44ec-b3a7-72a3b0dfa8eb 0xc00757af07 0xc00757af08}] [] [{kube-controller-manager Update v1 2021-02-02 23:33:20 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fb5bd986-1c9d-44ec-b3a7-72a3b0dfa8eb\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"test-deployment\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-02-02 23:33:24 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.222\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-mn67x,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-mn67x,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:test-deployment,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-mn67x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-02 23:33:20 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-02 23:33:24 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 
+0000 UTC,LastTransitionTime:2021-02-02 23:33:24 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-02 23:33:20 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.12,PodIP:10.244.1.222,StartTime:2021-02-02 23:33:20 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:test-deployment,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-02-02 23:33:24 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://50dcd43d06af162e3c5c1a6199d78a2c62b40a44e3ae6116b09cf9a48f5cb21a,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.222,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Feb 2 23:33:26.417: INFO: ReplicaSet "test-deployment-7c65d4bcf9": &ReplicaSet{ObjectMeta:{test-deployment-7c65d4bcf9 deployment-5478 213d2bb5-2241-4017-9134-bbed3aa47f36 4183133 4 2021-02-02 23:33:18 +0000 UTC map[pod-template-hash:7c65d4bcf9 test-deployment-static:true] map[deployment.kubernetes.io/desired-replicas:2 deployment.kubernetes.io/max-replicas:3 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-deployment c0edcaba-0ae1-475d-b05b-f858b6d75c39 0xc000955d57 0xc000955d58}] [] [{kube-controller-manager Update apps/v1 2021-02-02 23:33:25 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c0edcaba-0ae1-475d-b05b-f858b6d75c39\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"test-deployment\"}":{".":{},"f:command":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{pod-template-hash: 7c65d4bcf9,test-deployment-static: true,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[pod-template-hash:7c65d4bcf9 test-deployment-static:true] map[] [] [] []} {[] [] [{test-deployment k8s.gcr.io/pause:3.2 [/bin/sleep 100000] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc000955de8 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] 
}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:4,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Feb 2 23:33:26.421: INFO: ReplicaSet "test-deployment-8b6954bfb": &ReplicaSet{ObjectMeta:{test-deployment-8b6954bfb deployment-5478 673b5618-508f-4a80-a25e-c4b5adbe5343 4183043 2 2021-02-02 23:33:14 +0000 UTC map[pod-template-hash:8b6954bfb test-deployment-static:true] map[deployment.kubernetes.io/desired-replicas:2 deployment.kubernetes.io/max-replicas:3 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-deployment c0edcaba-0ae1-475d-b05b-f858b6d75c39 0xc000955ee7 0xc000955ee8}] [] [{kube-controller-manager Update apps/v1 2021-02-02 23:33:17 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c0edcaba-0ae1-475d-b05b-f858b6d75c39\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"test-deployment\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{pod-template-hash: 8b6954bfb,test-deployment-static: true,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[pod-template-hash:8b6954bfb test-deployment-static:true] map[] [] [] []} {[] [] [{test-deployment k8s.gcr.io/e2e-test-images/agnhost:2.21 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc0048e8020 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Feb 2 23:33:26.424: INFO: pod: "test-deployment-8b6954bfb-gw6s2": &Pod{ObjectMeta:{test-deployment-8b6954bfb-gw6s2 test-deployment-8b6954bfb- deployment-5478 a3f4e748-40f5-452d-96f6-b07b3e383bed 4183002 0 2021-02-02 23:33:14 +0000 UTC map[pod-template-hash:8b6954bfb test-deployment-static:true] map[] [{apps/v1 ReplicaSet test-deployment-8b6954bfb 673b5618-508f-4a80-a25e-c4b5adbe5343 0xc003db29e7 0xc003db29e8}] [] [{kube-controller-manager Update v1 2021-02-02 23:33:14 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"673b5618-508f-4a80-a25e-c4b5adbe5343\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"test-deployment\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-02-02 23:33:17 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.221\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-mn67x,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-mn67x,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:test-deployment,Image:k8s.gcr.io/e2e-test-images/agnhost:2.21,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-mn67x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,Read
inessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-02 23:33:14 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-02 23:33:17 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-02 23:33:17 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-02 23:33:14 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.12,PodIP:10.244.1.221,StartTime:2021-02-02 23:33:14 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:test-deployment,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-02-02 23:33:17 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/agnhost:2.21,ImageID:k8s.gcr.io/e2e-test-images/agnhost@sha256:ab055cd3d45f50b90732c14593a5bf50f210871bb4f91994c756fc22db6d922a,ContainerID:containerd://d41c15979c00fa897b80bb31f0b64fbf223ee2cf5451e8726d4a901224babd84,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.221,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 2 23:33:26.424: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-5478" for this suite. 
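A minimal client-go sketch of the create / patch / update / delete lifecycle this spec walks through. It is not the suite's own helper code; the namespace, names and the patch payload are illustrative assumptions, while the images match those visible in the ReplicaSet dumps above.

```go
package main

import (
	"context"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func int32Ptr(i int32) *int32 { return &i }

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	ns, name := "default", "test-deployment" // assumed namespace; the suite uses deployment-*
	labels := map[string]string{"test-deployment-static": "true"}
	deploy := &appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: name, Labels: labels},
		Spec: appsv1.DeploymentSpec{
			Replicas: int32Ptr(2),
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{Containers: []corev1.Container{{
					Name:  "test-deployment",
					Image: "k8s.gcr.io/e2e-test-images/agnhost:2.21",
				}}},
			},
		},
	}

	// 1. create
	if _, err := cs.AppsV1().Deployments(ns).Create(context.TODO(), deploy, metav1.CreateOptions{}); err != nil {
		panic(err)
	}

	// 2. patch: add a label and roll the pod template to a new image, which makes
	// the controller cut a new ReplicaSet (the revision bump seen in the dumps above).
	patch := []byte(`{"metadata":{"labels":{"test-deployment":"patched"}},` +
		`"spec":{"template":{"spec":{"containers":[{"name":"test-deployment","image":"k8s.gcr.io/pause:3.2"}]}}}}`)
	if _, err := cs.AppsV1().Deployments(ns).Patch(context.TODO(), name,
		types.StrategicMergePatchType, patch, metav1.PatchOptions{}); err != nil {
		panic(err)
	}

	// 3. update: read-modify-write the object to change the image again.
	cur, err := cs.AppsV1().Deployments(ns).Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	cur.Spec.Template.Spec.Containers[0].Image = "docker.io/library/httpd:2.4.38-alpine"
	if _, err := cs.AppsV1().Deployments(ns).Update(context.TODO(), cur, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}

	// 4. delete: owned ReplicaSets and pods are removed by the garbage collector.
	if err := cs.AppsV1().Deployments(ns).Delete(context.TODO(), name, metav1.DeleteOptions{}); err != nil {
		panic(err)
	}
}
```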
• [SLOW TEST:12.230 seconds] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run the lifecycle of a Deployment [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-apps] Deployment should run the lifecycle of a Deployment [Conformance]","total":309,"completed":150,"skipped":2702,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 2 23:33:26.432: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Feb 2 23:33:28.674: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Feb 2 23:33:30.753: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63747905609, loc:(*time.Location)(0x7962e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63747905609, loc:(*time.Location)(0x7962e20)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63747905609, loc:(*time.Location)(0x7962e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63747905608, loc:(*time.Location)(0x7962e20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6bd9446d55\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Feb 2 23:33:33.865: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with pruning [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 Feb 2 23:33:33.869: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-6480-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 2 23:33:35.005: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: 
Destroying namespace "webhook-2536" for this suite. STEP: Destroying namespace "webhook-2536-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 • [SLOW TEST:8.768 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with pruning [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":309,"completed":151,"skipped":2708,"failed":0} SSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should adopt matching pods on creation [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 2 23:33:35.201: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54 [It] should adopt matching pods on creation [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Given a Pod with a 'name' label pod-adoption is created STEP: When a replication controller with a matching selector is created STEP: Then the orphan pod is adopted [AfterEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 2 23:33:40.435: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-6450" for this suite. 
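[editor's note] The ReplicationController test above creates a bare, labelled pod first and only then the controller, relying on the controller manager to adopt the orphan through an ownerReference. A hedged sketch of the same order of operations, with assumed names, namespace, and image:

```go
// rc_adoption_sketch.go — illustrative; the e2e test uses its own generated names.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, _ := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	cs := kubernetes.NewForConfigOrDie(cfg)
	ctx, ns := context.Background(), "default" // assumed namespace
	labels := map[string]string{"name": "pod-adoption"}

	// 1. An orphan pod carrying the label the controller will select on.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-adoption", Labels: labels},
		Spec: corev1.PodSpec{Containers: []corev1.Container{{
			Name: "pod-adoption", Image: "k8s.gcr.io/e2e-test-images/agnhost:2.21",
		}}},
	}
	if _, err := cs.CoreV1().Pods(ns).Create(ctx, pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}

	// 2. A replication controller whose selector matches the orphan.
	one := int32(1)
	rc := &corev1.ReplicationController{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-adoption"},
		Spec: corev1.ReplicationControllerSpec{
			Replicas: &one,
			Selector: labels,
			Template: &corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec:       pod.Spec,
			},
		},
	}
	if _, err := cs.CoreV1().ReplicationControllers(ns).Create(ctx, rc, metav1.CreateOptions{}); err != nil {
		panic(err)
	}

	// 3. After adoption the pod gains an ownerReference pointing at the RC.
	//    A real check would poll; adoption is not instantaneous.
	adopted, _ := cs.CoreV1().Pods(ns).Get(ctx, "pod-adoption", metav1.GetOptions{})
	fmt.Printf("ownerReferences: %+v\n", adopted.OwnerReferences)
}
```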
• [SLOW TEST:5.252 seconds] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":309,"completed":152,"skipped":2718,"failed":0} S ------------------------------ [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 2 23:33:40.453: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete pods created by rc when not orphaning [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: create the rc STEP: delete the rc STEP: wait for all pods to be garbage collected STEP: Gathering metrics W0202 23:33:50.606312 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Feb 2 23:34:52.626: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering. [AfterEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 2 23:34:52.626: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-8014" for this suite. 
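[editor's note] The garbage-collector test deletes the RC without orphaning its pods, i.e. with a cascading propagation policy, and then waits for the dependents to be collected. A minimal sketch of issuing such a delete with client-go; the controller name and namespace are assumptions:

```go
// gc_cascading_delete_sketch.go — illustrative; "example-rc" is an assumed name.
package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, _ := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Background propagation: the RC is deleted immediately and the garbage
	// collector removes the dependent pods afterwards. Foreground would block
	// the RC's deletion until the pods are gone; Orphan would leave them behind.
	policy := metav1.DeletePropagationBackground
	err := cs.CoreV1().ReplicationControllers("default").Delete(
		context.Background(),
		"example-rc",
		metav1.DeleteOptions{PropagationPolicy: &policy},
	)
	if err != nil {
		panic(err)
	}
}
```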
• [SLOW TEST:72.181 seconds] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should delete pods created by rc when not orphaning [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":309,"completed":153,"skipped":2719,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 2 23:34:52.635: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Feb 2 23:34:53.569: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Feb 2 23:34:55.757: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63747905693, loc:(*time.Location)(0x7962e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63747905693, loc:(*time.Location)(0x7962e20)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63747905693, loc:(*time.Location)(0x7962e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63747905693, loc:(*time.Location)(0x7962e20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6bd9446d55\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 2 23:34:57.761: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63747905693, loc:(*time.Location)(0x7962e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63747905693, loc:(*time.Location)(0x7962e20)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63747905693, loc:(*time.Location)(0x7962e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63747905693, 
loc:(*time.Location)(0x7962e20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6bd9446d55\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Feb 2 23:35:00.785: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should unconditionally reject operations on fail closed webhook [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API STEP: create a namespace for the webhook STEP: create a configmap should be unconditionally rejected by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 2 23:35:00.845: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-3955" for this suite. STEP: Destroying namespace "webhook-3955-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 • [SLOW TEST:8.421 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should unconditionally reject operations on fail closed webhook [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":309,"completed":154,"skipped":2736,"failed":0} SSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 2 23:35:01.056: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Feb 2 23:35:02.057: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Feb 2 23:35:04.069: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63747905702, loc:(*time.Location)(0x7962e20)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63747905702, loc:(*time.Location)(0x7962e20)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63747905702, loc:(*time.Location)(0x7962e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63747905701, loc:(*time.Location)(0x7962e20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6bd9446d55\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Feb 2 23:35:07.137: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny attaching pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod STEP: 'kubectl attach' the pod, should be denied by the webhook Feb 2 23:35:11.210: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=webhook-7099 attach --namespace=webhook-7099 to-be-attached-pod -i -c=container1' Feb 2 23:35:14.362: INFO: rc: 1 [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 2 23:35:14.367: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-7099" for this suite. STEP: Destroying namespace "webhook-7099-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 • [SLOW TEST:13.477 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny attaching pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":309,"completed":155,"skipped":2745,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should provide secure master service [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 2 23:35:14.533: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:745 [It] should provide secure master service [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [AfterEach] [sig-network] Services 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 2 23:35:14.665: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-8027" for this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 •{"msg":"PASSED [sig-network] Services should provide secure master service [Conformance]","total":309,"completed":156,"skipped":2760,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 2 23:35:14.673: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating projection with secret that has name projected-secret-test-map-f2e14bc8-96ff-4a9d-bd3a-0fcbf215bfd6 STEP: Creating a pod to test consume secrets Feb 2 23:35:15.181: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-a9743d18-c630-4a8f-bdf1-66d991f06c94" in namespace "projected-6330" to be "Succeeded or Failed" Feb 2 23:35:15.196: INFO: Pod "pod-projected-secrets-a9743d18-c630-4a8f-bdf1-66d991f06c94": Phase="Pending", Reason="", readiness=false. Elapsed: 14.520488ms Feb 2 23:35:17.200: INFO: Pod "pod-projected-secrets-a9743d18-c630-4a8f-bdf1-66d991f06c94": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019352787s Feb 2 23:35:19.205: INFO: Pod "pod-projected-secrets-a9743d18-c630-4a8f-bdf1-66d991f06c94": Phase="Pending", Reason="", readiness=false. Elapsed: 4.023849047s Feb 2 23:35:21.221: INFO: Pod "pod-projected-secrets-a9743d18-c630-4a8f-bdf1-66d991f06c94": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.039962373s STEP: Saw pod success Feb 2 23:35:21.221: INFO: Pod "pod-projected-secrets-a9743d18-c630-4a8f-bdf1-66d991f06c94" satisfied condition "Succeeded or Failed" Feb 2 23:35:21.224: INFO: Trying to get logs from node leguer-worker pod pod-projected-secrets-a9743d18-c630-4a8f-bdf1-66d991f06c94 container projected-secret-volume-test: STEP: delete the pod Feb 2 23:35:21.278: INFO: Waiting for pod pod-projected-secrets-a9743d18-c630-4a8f-bdf1-66d991f06c94 to disappear Feb 2 23:35:21.286: INFO: Pod pod-projected-secrets-a9743d18-c630-4a8f-bdf1-66d991f06c94 no longer exists [AfterEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 2 23:35:21.286: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6330" for this suite. 
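[editor's note] The projected-secret test above mounts a single secret key at a mapped path with an explicit per-item mode. A sketch of the relevant volume wiring; the secret name, key, path, and mode value are assumptions in the spirit of the test, not the exact spec it submits:

```go
// projected_secret_items_sketch.go — illustrative volume wiring; names and mode are assumptions.
package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// projectedSecretPod mounts one key of a secret through a projected volume,
// mapping it to a new path and giving that file an explicit mode.
func projectedSecretPod() *corev1.Pod {
	mode := int32(0400) // assumed per-item mode
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-secrets"},
		Spec: corev1.PodSpec{
			Volumes: []corev1.Volume{{
				Name: "projected-secret-volume",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							Secret: &corev1.SecretProjection{
								LocalObjectReference: corev1.LocalObjectReference{
									Name: "projected-secret-test-map", // assumed secret name
								},
								Items: []corev1.KeyToPath{{
									Key:  "data-1",          // assumed key
									Path: "new-path-data-1", // mapped path inside the mount
									Mode: &mode,
								}},
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:  "projected-secret-volume-test",
				Image: "k8s.gcr.io/e2e-test-images/agnhost:2.21",
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "projected-secret-volume",
					MountPath: "/etc/projected-secret-volume",
				}},
			}},
			RestartPolicy: corev1.RestartPolicyNever,
		},
	}
}

func main() { _ = projectedSecretPod() }
```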
• [SLOW TEST:6.620 seconds] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35 should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":309,"completed":157,"skipped":2783,"failed":0} SSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 2 23:35:21.293: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating projection with secret that has name projected-secret-test-a545aa36-d807-4235-a550-7c1d52c45cf3 STEP: Creating a pod to test consume secrets Feb 2 23:35:21.432: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-8e2ce414-e88f-4632-b1c4-e3f334b73e5d" in namespace "projected-7874" to be "Succeeded or Failed" Feb 2 23:35:21.436: INFO: Pod "pod-projected-secrets-8e2ce414-e88f-4632-b1c4-e3f334b73e5d": Phase="Pending", Reason="", readiness=false. Elapsed: 3.387818ms Feb 2 23:35:23.440: INFO: Pod "pod-projected-secrets-8e2ce414-e88f-4632-b1c4-e3f334b73e5d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008091263s Feb 2 23:35:25.445: INFO: Pod "pod-projected-secrets-8e2ce414-e88f-4632-b1c4-e3f334b73e5d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01240754s STEP: Saw pod success Feb 2 23:35:25.445: INFO: Pod "pod-projected-secrets-8e2ce414-e88f-4632-b1c4-e3f334b73e5d" satisfied condition "Succeeded or Failed" Feb 2 23:35:25.447: INFO: Trying to get logs from node leguer-worker pod pod-projected-secrets-8e2ce414-e88f-4632-b1c4-e3f334b73e5d container projected-secret-volume-test: STEP: delete the pod Feb 2 23:35:25.856: INFO: Waiting for pod pod-projected-secrets-8e2ce414-e88f-4632-b1c4-e3f334b73e5d to disappear Feb 2 23:35:25.860: INFO: Pod pod-projected-secrets-8e2ce414-e88f-4632-b1c4-e3f334b73e5d no longer exists [AfterEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 2 23:35:25.860: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7874" for this suite. 
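[editor's note] The defaultMode variant that follows differs only in where the permission bits are set: instead of a per-item Mode, the whole projected volume gets a DefaultMode that every projected file inherits. A minimal sketch of that volume source, with an assumed mode value:

```go
// projected_default_mode_sketch.go — illustrative; the mode value is an assumption.
package main

import corev1 "k8s.io/api/core/v1"

// defaultModeProjection builds a projected secret volume whose files all
// inherit DefaultMode rather than carrying per-item modes.
func defaultModeProjection(secretName string) corev1.VolumeSource {
	mode := int32(0400) // assumed default mode for every projected file
	return corev1.VolumeSource{
		Projected: &corev1.ProjectedVolumeSource{
			DefaultMode: &mode,
			Sources: []corev1.VolumeProjection{{
				Secret: &corev1.SecretProjection{
					LocalObjectReference: corev1.LocalObjectReference{Name: secretName},
				},
			}},
		},
	}
}

func main() { _ = defaultModeProjection("projected-secret-test") }
```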
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":309,"completed":158,"skipped":2790,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should mount an API token into pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 2 23:35:25.876: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should mount an API token into pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: getting the auto-created API token STEP: reading a file in the container Feb 2 23:35:30.558: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-4156 pod-service-account-ba9c8452-f563-47f3-a44c-c9b178864e5e -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token' STEP: reading a file in the container Feb 2 23:35:30.804: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-4156 pod-service-account-ba9c8452-f563-47f3-a44c-c9b178864e5e -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt' STEP: reading a file in the container Feb 2 23:35:31.011: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-4156 pod-service-account-ba9c8452-f563-47f3-a44c-c9b178864e5e -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace' [AfterEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 2 23:35:31.210: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-4156" for this suite. • [SLOW TEST:5.342 seconds] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 should mount an API token into pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods [Conformance]","total":309,"completed":159,"skipped":2809,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 2 23:35:31.218: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and ensure its status is promptly calculated. 
[Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 2 23:35:38.297: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-2420" for this suite. • [SLOW TEST:7.089 seconds] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]","total":309,"completed":160,"skipped":2840,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 2 23:35:38.308: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename tables STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/table_conversion.go:47 [It] should return a 406 for a backend which does not implement metadata [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [AfterEach] [sig-api-machinery] Servers with support for Table transformation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 2 23:35:38.366: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "tables-5381" for this suite. 
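[editor's note] The ResourceQuota test earlier in this block creates a quota and then polls until the controller has populated its status. A hedged client-go sketch of the same create-and-read; the quota name, namespace, and hard limits are assumptions:

```go
// resourcequota_status_sketch.go — illustrative; names and limits are assumptions.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, _ := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	cs := kubernetes.NewForConfigOrDie(cfg)
	ctx, ns := context.Background(), "default" // assumed namespace

	rq := &corev1.ResourceQuota{
		ObjectMeta: metav1.ObjectMeta{Name: "test-quota"},
		Spec: corev1.ResourceQuotaSpec{
			Hard: corev1.ResourceList{
				corev1.ResourcePods:    resource.MustParse("5"),
				corev1.ResourceSecrets: resource.MustParse("10"),
			},
		},
	}
	if _, err := cs.CoreV1().ResourceQuotas(ns).Create(ctx, rq, metav1.CreateOptions{}); err != nil {
		panic(err)
	}

	// The quota controller fills in .status.hard/.status.used shortly after
	// creation; the conformance test polls until the status is calculated.
	got, _ := cs.CoreV1().ResourceQuotas(ns).Get(ctx, "test-quota", metav1.GetOptions{})
	fmt.Printf("hard: %v\nused: %v\n", got.Status.Hard, got.Status.Used)
}
```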
•{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":309,"completed":161,"skipped":2866,"failed":0} SSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 2 23:35:38.402: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:745 [It] should be able to change the type from ClusterIP to ExternalName [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: creating a service clusterip-service with the type=ClusterIP in namespace services-9212 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-9212 STEP: creating replication controller externalsvc in namespace services-9212 I0202 23:35:39.299568 7 runners.go:190] Created replication controller with name: externalsvc, namespace: services-9212, replica count: 2 I0202 23:35:42.349922 7 runners.go:190] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0202 23:35:45.350146 7 runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the ClusterIP service to type=ExternalName Feb 2 23:35:45.401: INFO: Creating new exec pod Feb 2 23:35:49.473: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=services-9212 exec execpodp92l7 -- /bin/sh -x -c nslookup clusterip-service.services-9212.svc.cluster.local' Feb 2 23:35:49.745: INFO: stderr: "I0202 23:35:49.617593 2512 log.go:181] (0xc000121550) (0xc000b7e8c0) Create stream\nI0202 23:35:49.617704 2512 log.go:181] (0xc000121550) (0xc000b7e8c0) Stream added, broadcasting: 1\nI0202 23:35:49.622470 2512 log.go:181] (0xc000121550) Reply frame received for 1\nI0202 23:35:49.622528 2512 log.go:181] (0xc000121550) (0xc000b7e000) Create stream\nI0202 23:35:49.622545 2512 log.go:181] (0xc000121550) (0xc000b7e000) Stream added, broadcasting: 3\nI0202 23:35:49.623547 2512 log.go:181] (0xc000121550) Reply frame received for 3\nI0202 23:35:49.623582 2512 log.go:181] (0xc000121550) (0xc000b7e0a0) Create stream\nI0202 23:35:49.623590 2512 log.go:181] (0xc000121550) (0xc000b7e0a0) Stream added, broadcasting: 5\nI0202 23:35:49.624664 2512 log.go:181] (0xc000121550) Reply frame received for 5\nI0202 23:35:49.723672 2512 log.go:181] (0xc000121550) Data frame received for 5\nI0202 23:35:49.723705 2512 log.go:181] (0xc000b7e0a0) (5) Data frame handling\nI0202 23:35:49.723734 2512 log.go:181] (0xc000b7e0a0) (5) Data frame sent\n+ nslookup clusterip-service.services-9212.svc.cluster.local\nI0202 23:35:49.736983 2512 log.go:181] 
(0xc000121550) Data frame received for 3\nI0202 23:35:49.737016 2512 log.go:181] (0xc000b7e000) (3) Data frame handling\nI0202 23:35:49.737036 2512 log.go:181] (0xc000b7e000) (3) Data frame sent\nI0202 23:35:49.738095 2512 log.go:181] (0xc000121550) Data frame received for 3\nI0202 23:35:49.738123 2512 log.go:181] (0xc000b7e000) (3) Data frame handling\nI0202 23:35:49.738141 2512 log.go:181] (0xc000b7e000) (3) Data frame sent\nI0202 23:35:49.738766 2512 log.go:181] (0xc000121550) Data frame received for 5\nI0202 23:35:49.738783 2512 log.go:181] (0xc000b7e0a0) (5) Data frame handling\nI0202 23:35:49.738812 2512 log.go:181] (0xc000121550) Data frame received for 3\nI0202 23:35:49.738834 2512 log.go:181] (0xc000b7e000) (3) Data frame handling\nI0202 23:35:49.740373 2512 log.go:181] (0xc000121550) Data frame received for 1\nI0202 23:35:49.740385 2512 log.go:181] (0xc000b7e8c0) (1) Data frame handling\nI0202 23:35:49.740392 2512 log.go:181] (0xc000b7e8c0) (1) Data frame sent\nI0202 23:35:49.740402 2512 log.go:181] (0xc000121550) (0xc000b7e8c0) Stream removed, broadcasting: 1\nI0202 23:35:49.740412 2512 log.go:181] (0xc000121550) Go away received\nI0202 23:35:49.740781 2512 log.go:181] (0xc000121550) (0xc000b7e8c0) Stream removed, broadcasting: 1\nI0202 23:35:49.740797 2512 log.go:181] (0xc000121550) (0xc000b7e000) Stream removed, broadcasting: 3\nI0202 23:35:49.740805 2512 log.go:181] (0xc000121550) (0xc000b7e0a0) Stream removed, broadcasting: 5\n" Feb 2 23:35:49.745: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nclusterip-service.services-9212.svc.cluster.local\tcanonical name = externalsvc.services-9212.svc.cluster.local.\nName:\texternalsvc.services-9212.svc.cluster.local\nAddress: 10.96.21.124\n\n" STEP: deleting ReplicationController externalsvc in namespace services-9212, will wait for the garbage collector to delete the pods Feb 2 23:35:49.805: INFO: Deleting ReplicationController externalsvc took: 6.618372ms Feb 2 23:35:50.405: INFO: Terminating ReplicationController externalsvc pods took: 600.24112ms Feb 2 23:36:10.253: INFO: Cleaning up the ClusterIP to ExternalName test service [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 2 23:36:10.306: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-9212" for this suite. 
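[editor's note] The Services test above flips a ClusterIP service to type ExternalName and then checks, from an exec pod, that the old service name resolves as a CNAME to the externalName target (visible in the nslookup output). A hedged sketch of the type change with client-go; the service and target names mirror the log, while the exact fields that must be cleared can vary by cluster version:

```go
// service_to_externalname_sketch.go — illustrative; exact update semantics are an assumption.
package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, _ := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	cs := kubernetes.NewForConfigOrDie(cfg)
	ctx, ns := context.Background(), "services-9212"

	svc, err := cs.CoreV1().Services(ns).Get(ctx, "clusterip-service", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}

	// Switching to ExternalName: cluster DNS answers with a CNAME to
	// Spec.ExternalName instead of an A record for a cluster IP. Depending on
	// the cluster version, the allocated cluster IP(s) may also need to be
	// cleared explicitly as part of the same update.
	svc.Spec.Type = corev1.ServiceTypeExternalName
	svc.Spec.ExternalName = "externalsvc.services-9212.svc.cluster.local"
	svc.Spec.ClusterIP = ""
	svc.Spec.ClusterIPs = nil
	svc.Spec.Ports = nil

	if _, err := cs.CoreV1().Services(ns).Update(ctx, svc, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
}
```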
[AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 • [SLOW TEST:31.917 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ClusterIP to ExternalName [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":309,"completed":162,"skipped":2875,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 2 23:36:10.319: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:162 [It] should invoke init containers on a RestartAlways pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: creating the pod Feb 2 23:36:10.429: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 2 23:36:18.426: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-7758" for this suite. 
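[editor's note] On a RestartAlways pod, the init containers in the test above must run to completion, in order, before any regular container starts. A minimal pod sketch with two init containers; the images and commands are assumptions in the spirit of the test, not the exact spec it submits:

```go
// init_containers_sketch.go — illustrative; images and commands are assumptions.
package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// initContainerPod runs two init containers to completion, in order, before
// the long-running main container starts; RestartAlways is the pod default.
func initContainerPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-init"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyAlways,
			InitContainers: []corev1.Container{
				{Name: "init1", Image: "busybox", Command: []string{"/bin/true"}},
				{Name: "init2", Image: "busybox", Command: []string{"/bin/true"}},
			},
			Containers: []corev1.Container{
				{Name: "run1", Image: "k8s.gcr.io/pause:3.2"},
			},
		},
	}
}

func main() { _ = initContainerPod() }
```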
• [SLOW TEST:8.131 seconds] [k8s.io] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 should invoke init containers on a RestartAlways pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":309,"completed":163,"skipped":2900,"failed":0} SS ------------------------------ [sig-network] DNS should provide DNS for ExternalName services [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 2 23:36:18.450: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for ExternalName services [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a test externalName service STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-9346.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-9346.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-9346.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-9346.svc.cluster.local; sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Feb 2 23:36:24.631: INFO: DNS probes using dns-test-c4dc9f40-a242-4057-b47d-0bdb6981c724 succeeded STEP: deleting the pod STEP: changing the externalName to bar.example.com STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-9346.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-9346.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-9346.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-9346.svc.cluster.local; sleep 1; done STEP: creating a second pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Feb 2 23:36:30.884: INFO: File wheezy_udp@dns-test-service-3.dns-9346.svc.cluster.local from pod dns-9346/dns-test-cf21bd44-fa0a-4044-a2f9-1a84385fc02c contains 'foo.example.com. ' instead of 'bar.example.com.' Feb 2 23:36:30.888: INFO: File jessie_udp@dns-test-service-3.dns-9346.svc.cluster.local from pod dns-9346/dns-test-cf21bd44-fa0a-4044-a2f9-1a84385fc02c contains 'foo.example.com. ' instead of 'bar.example.com.' Feb 2 23:36:30.888: INFO: Lookups using dns-9346/dns-test-cf21bd44-fa0a-4044-a2f9-1a84385fc02c failed for: [wheezy_udp@dns-test-service-3.dns-9346.svc.cluster.local jessie_udp@dns-test-service-3.dns-9346.svc.cluster.local] Feb 2 23:36:35.894: INFO: File wheezy_udp@dns-test-service-3.dns-9346.svc.cluster.local from pod dns-9346/dns-test-cf21bd44-fa0a-4044-a2f9-1a84385fc02c contains 'foo.example.com. 
' instead of 'bar.example.com.' Feb 2 23:36:35.897: INFO: File jessie_udp@dns-test-service-3.dns-9346.svc.cluster.local from pod dns-9346/dns-test-cf21bd44-fa0a-4044-a2f9-1a84385fc02c contains 'foo.example.com. ' instead of 'bar.example.com.' Feb 2 23:36:35.897: INFO: Lookups using dns-9346/dns-test-cf21bd44-fa0a-4044-a2f9-1a84385fc02c failed for: [wheezy_udp@dns-test-service-3.dns-9346.svc.cluster.local jessie_udp@dns-test-service-3.dns-9346.svc.cluster.local] Feb 2 23:36:40.895: INFO: File wheezy_udp@dns-test-service-3.dns-9346.svc.cluster.local from pod dns-9346/dns-test-cf21bd44-fa0a-4044-a2f9-1a84385fc02c contains 'foo.example.com. ' instead of 'bar.example.com.' Feb 2 23:36:40.900: INFO: File jessie_udp@dns-test-service-3.dns-9346.svc.cluster.local from pod dns-9346/dns-test-cf21bd44-fa0a-4044-a2f9-1a84385fc02c contains 'foo.example.com. ' instead of 'bar.example.com.' Feb 2 23:36:40.900: INFO: Lookups using dns-9346/dns-test-cf21bd44-fa0a-4044-a2f9-1a84385fc02c failed for: [wheezy_udp@dns-test-service-3.dns-9346.svc.cluster.local jessie_udp@dns-test-service-3.dns-9346.svc.cluster.local] Feb 2 23:36:45.893: INFO: File wheezy_udp@dns-test-service-3.dns-9346.svc.cluster.local from pod dns-9346/dns-test-cf21bd44-fa0a-4044-a2f9-1a84385fc02c contains 'foo.example.com. ' instead of 'bar.example.com.' Feb 2 23:36:45.897: INFO: File jessie_udp@dns-test-service-3.dns-9346.svc.cluster.local from pod dns-9346/dns-test-cf21bd44-fa0a-4044-a2f9-1a84385fc02c contains 'foo.example.com. ' instead of 'bar.example.com.' Feb 2 23:36:45.897: INFO: Lookups using dns-9346/dns-test-cf21bd44-fa0a-4044-a2f9-1a84385fc02c failed for: [wheezy_udp@dns-test-service-3.dns-9346.svc.cluster.local jessie_udp@dns-test-service-3.dns-9346.svc.cluster.local] Feb 2 23:36:50.893: INFO: File wheezy_udp@dns-test-service-3.dns-9346.svc.cluster.local from pod dns-9346/dns-test-cf21bd44-fa0a-4044-a2f9-1a84385fc02c contains 'foo.example.com. ' instead of 'bar.example.com.' Feb 2 23:36:50.896: INFO: File jessie_udp@dns-test-service-3.dns-9346.svc.cluster.local from pod dns-9346/dns-test-cf21bd44-fa0a-4044-a2f9-1a84385fc02c contains 'foo.example.com. ' instead of 'bar.example.com.' 
Feb 2 23:36:50.896: INFO: Lookups using dns-9346/dns-test-cf21bd44-fa0a-4044-a2f9-1a84385fc02c failed for: [wheezy_udp@dns-test-service-3.dns-9346.svc.cluster.local jessie_udp@dns-test-service-3.dns-9346.svc.cluster.local] Feb 2 23:36:55.898: INFO: DNS probes using dns-test-cf21bd44-fa0a-4044-a2f9-1a84385fc02c succeeded STEP: deleting the pod STEP: changing the service to type=ClusterIP STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-9346.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-9346.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-9346.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-9346.svc.cluster.local; sleep 1; done STEP: creating a third pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Feb 2 23:37:04.633: INFO: DNS probes using dns-test-95cee5b7-68ff-4d3d-892e-b0d18cdc1c98 succeeded STEP: deleting the pod STEP: deleting the test externalName service [AfterEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 2 23:37:04.928: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-9346" for this suite. • [SLOW TEST:46.787 seconds] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for ExternalName services [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":309,"completed":164,"skipped":2902,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 2 23:37:05.238: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:745 [It] should have session affinity work for NodePort service [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: creating service in namespace services-525 STEP: creating service affinity-nodeport in namespace services-525 STEP: creating replication controller affinity-nodeport in namespace services-525 I0202 23:37:05.399897 7 runners.go:190] Created replication controller with name: affinity-nodeport, namespace: services-525, replica count: 3 I0202 23:37:08.450312 7 runners.go:190] affinity-nodeport Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0202 23:37:11.450628 7 runners.go:190] affinity-nodeport Pods: 3 out of 3 created, 1 running, 2 pending, 0 waiting, 0 inactive, 
0 terminating, 0 unknown, 0 runningButNotReady I0202 23:37:14.450900 7 runners.go:190] affinity-nodeport Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Feb 2 23:37:14.461: INFO: Creating new exec pod Feb 2 23:37:19.481: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=services-525 exec execpod-affinitymxqrc -- /bin/sh -x -c nc -zv -t -w 2 affinity-nodeport 80' Feb 2 23:37:19.723: INFO: stderr: "I0202 23:37:19.621350 2530 log.go:181] (0xc000aa8000) (0xc000b30000) Create stream\nI0202 23:37:19.621435 2530 log.go:181] (0xc000aa8000) (0xc000b30000) Stream added, broadcasting: 1\nI0202 23:37:19.623372 2530 log.go:181] (0xc000aa8000) Reply frame received for 1\nI0202 23:37:19.623421 2530 log.go:181] (0xc000aa8000) (0xc000b300a0) Create stream\nI0202 23:37:19.623437 2530 log.go:181] (0xc000aa8000) (0xc000b300a0) Stream added, broadcasting: 3\nI0202 23:37:19.624362 2530 log.go:181] (0xc000aa8000) Reply frame received for 3\nI0202 23:37:19.624405 2530 log.go:181] (0xc000aa8000) (0xc0000cdd60) Create stream\nI0202 23:37:19.624421 2530 log.go:181] (0xc000aa8000) (0xc0000cdd60) Stream added, broadcasting: 5\nI0202 23:37:19.625418 2530 log.go:181] (0xc000aa8000) Reply frame received for 5\nI0202 23:37:19.715070 2530 log.go:181] (0xc000aa8000) Data frame received for 5\nI0202 23:37:19.715115 2530 log.go:181] (0xc0000cdd60) (5) Data frame handling\nI0202 23:37:19.715152 2530 log.go:181] (0xc0000cdd60) (5) Data frame sent\n+ nc -zv -t -w 2 affinity-nodeport 80\nI0202 23:37:19.715594 2530 log.go:181] (0xc000aa8000) Data frame received for 5\nI0202 23:37:19.715631 2530 log.go:181] (0xc0000cdd60) (5) Data frame handling\nI0202 23:37:19.715665 2530 log.go:181] (0xc0000cdd60) (5) Data frame sent\nConnection to affinity-nodeport 80 port [tcp/http] succeeded!\nI0202 23:37:19.715890 2530 log.go:181] (0xc000aa8000) Data frame received for 5\nI0202 23:37:19.715924 2530 log.go:181] (0xc0000cdd60) (5) Data frame handling\nI0202 23:37:19.716135 2530 log.go:181] (0xc000aa8000) Data frame received for 3\nI0202 23:37:19.716159 2530 log.go:181] (0xc000b300a0) (3) Data frame handling\nI0202 23:37:19.717824 2530 log.go:181] (0xc000aa8000) Data frame received for 1\nI0202 23:37:19.717841 2530 log.go:181] (0xc000b30000) (1) Data frame handling\nI0202 23:37:19.717850 2530 log.go:181] (0xc000b30000) (1) Data frame sent\nI0202 23:37:19.717861 2530 log.go:181] (0xc000aa8000) (0xc000b30000) Stream removed, broadcasting: 1\nI0202 23:37:19.717898 2530 log.go:181] (0xc000aa8000) Go away received\nI0202 23:37:19.718232 2530 log.go:181] (0xc000aa8000) (0xc000b30000) Stream removed, broadcasting: 1\nI0202 23:37:19.718251 2530 log.go:181] (0xc000aa8000) (0xc000b300a0) Stream removed, broadcasting: 3\nI0202 23:37:19.718259 2530 log.go:181] (0xc000aa8000) (0xc0000cdd60) Stream removed, broadcasting: 5\n" Feb 2 23:37:19.724: INFO: stdout: "" Feb 2 23:37:19.724: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=services-525 exec execpod-affinitymxqrc -- /bin/sh -x -c nc -zv -t -w 2 10.96.96.94 80' Feb 2 23:37:19.927: INFO: stderr: "I0202 23:37:19.849920 2548 log.go:181] (0xc0001cca50) (0xc000985040) Create stream\nI0202 23:37:19.849972 2548 log.go:181] (0xc0001cca50) (0xc000985040) Stream added, broadcasting: 1\nI0202 23:37:19.852475 2548 log.go:181] (0xc0001cca50) Reply frame received for 1\nI0202 23:37:19.852551 2548 log.go:181] 
(0xc0001cca50) (0xc000a82000) Create stream\nI0202 23:37:19.852588 2548 log.go:181] (0xc0001cca50) (0xc000a82000) Stream added, broadcasting: 3\nI0202 23:37:19.853791 2548 log.go:181] (0xc0001cca50) Reply frame received for 3\nI0202 23:37:19.853826 2548 log.go:181] (0xc0001cca50) (0xc000a820a0) Create stream\nI0202 23:37:19.853839 2548 log.go:181] (0xc0001cca50) (0xc000a820a0) Stream added, broadcasting: 5\nI0202 23:37:19.854596 2548 log.go:181] (0xc0001cca50) Reply frame received for 5\nI0202 23:37:19.918925 2548 log.go:181] (0xc0001cca50) Data frame received for 5\nI0202 23:37:19.918967 2548 log.go:181] (0xc000a820a0) (5) Data frame handling\nI0202 23:37:19.918982 2548 log.go:181] (0xc000a820a0) (5) Data frame sent\nI0202 23:37:19.918994 2548 log.go:181] (0xc0001cca50) Data frame received for 5\nI0202 23:37:19.919004 2548 log.go:181] (0xc000a820a0) (5) Data frame handling\n+ nc -zv -t -w 2 10.96.96.94 80\nConnection to 10.96.96.94 80 port [tcp/http] succeeded!\nI0202 23:37:19.919037 2548 log.go:181] (0xc0001cca50) Data frame received for 3\nI0202 23:37:19.919057 2548 log.go:181] (0xc000a82000) (3) Data frame handling\nI0202 23:37:19.920469 2548 log.go:181] (0xc0001cca50) Data frame received for 1\nI0202 23:37:19.920510 2548 log.go:181] (0xc000985040) (1) Data frame handling\nI0202 23:37:19.920542 2548 log.go:181] (0xc000985040) (1) Data frame sent\nI0202 23:37:19.920568 2548 log.go:181] (0xc0001cca50) (0xc000985040) Stream removed, broadcasting: 1\nI0202 23:37:19.920690 2548 log.go:181] (0xc0001cca50) Go away received\nI0202 23:37:19.921237 2548 log.go:181] (0xc0001cca50) (0xc000985040) Stream removed, broadcasting: 1\nI0202 23:37:19.921263 2548 log.go:181] (0xc0001cca50) (0xc000a82000) Stream removed, broadcasting: 3\nI0202 23:37:19.921276 2548 log.go:181] (0xc0001cca50) (0xc000a820a0) Stream removed, broadcasting: 5\n" Feb 2 23:37:19.927: INFO: stdout: "" Feb 2 23:37:19.927: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=services-525 exec execpod-affinitymxqrc -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.13 30135' Feb 2 23:37:20.142: INFO: stderr: "I0202 23:37:20.065188 2567 log.go:181] (0xc00018c370) (0xc000207220) Create stream\nI0202 23:37:20.065298 2567 log.go:181] (0xc00018c370) (0xc000207220) Stream added, broadcasting: 1\nI0202 23:37:20.068538 2567 log.go:181] (0xc00018c370) Reply frame received for 1\nI0202 23:37:20.068626 2567 log.go:181] (0xc00018c370) (0xc000a0e640) Create stream\nI0202 23:37:20.068657 2567 log.go:181] (0xc00018c370) (0xc000a0e640) Stream added, broadcasting: 3\nI0202 23:37:20.070052 2567 log.go:181] (0xc00018c370) Reply frame received for 3\nI0202 23:37:20.070112 2567 log.go:181] (0xc00018c370) (0xc000a0ec80) Create stream\nI0202 23:37:20.070136 2567 log.go:181] (0xc00018c370) (0xc000a0ec80) Stream added, broadcasting: 5\nI0202 23:37:20.071268 2567 log.go:181] (0xc00018c370) Reply frame received for 5\nI0202 23:37:20.134521 2567 log.go:181] (0xc00018c370) Data frame received for 5\nI0202 23:37:20.134564 2567 log.go:181] (0xc000a0ec80) (5) Data frame handling\nI0202 23:37:20.134580 2567 log.go:181] (0xc000a0ec80) (5) Data frame sent\n+ nc -zv -t -w 2 172.18.0.13 30135\nConnection to 172.18.0.13 30135 port [tcp/30135] succeeded!\nI0202 23:37:20.134608 2567 log.go:181] (0xc00018c370) Data frame received for 3\nI0202 23:37:20.134635 2567 log.go:181] (0xc000a0e640) (3) Data frame handling\nI0202 23:37:20.134661 2567 log.go:181] (0xc00018c370) Data frame received for 5\nI0202 23:37:20.134689 2567 
log.go:181] (0xc000a0ec80) (5) Data frame handling\nI0202 23:37:20.135941 2567 log.go:181] (0xc00018c370) Data frame received for 1\nI0202 23:37:20.135970 2567 log.go:181] (0xc000207220) (1) Data frame handling\nI0202 23:37:20.135989 2567 log.go:181] (0xc000207220) (1) Data frame sent\nI0202 23:37:20.136007 2567 log.go:181] (0xc00018c370) (0xc000207220) Stream removed, broadcasting: 1\nI0202 23:37:20.136027 2567 log.go:181] (0xc00018c370) Go away received\nI0202 23:37:20.136378 2567 log.go:181] (0xc00018c370) (0xc000207220) Stream removed, broadcasting: 1\nI0202 23:37:20.136396 2567 log.go:181] (0xc00018c370) (0xc000a0e640) Stream removed, broadcasting: 3\nI0202 23:37:20.136404 2567 log.go:181] (0xc00018c370) (0xc000a0ec80) Stream removed, broadcasting: 5\n" Feb 2 23:37:20.142: INFO: stdout: "" Feb 2 23:37:20.142: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=services-525 exec execpod-affinitymxqrc -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.12 30135' Feb 2 23:37:20.330: INFO: stderr: "I0202 23:37:20.265079 2585 log.go:181] (0xc0009e0000) (0xc000d80000) Create stream\nI0202 23:37:20.265159 2585 log.go:181] (0xc0009e0000) (0xc000d80000) Stream added, broadcasting: 1\nI0202 23:37:20.267622 2585 log.go:181] (0xc0009e0000) Reply frame received for 1\nI0202 23:37:20.267663 2585 log.go:181] (0xc0009e0000) (0xc000b090e0) Create stream\nI0202 23:37:20.267676 2585 log.go:181] (0xc0009e0000) (0xc000b090e0) Stream added, broadcasting: 3\nI0202 23:37:20.268617 2585 log.go:181] (0xc0009e0000) Reply frame received for 3\nI0202 23:37:20.268650 2585 log.go:181] (0xc0009e0000) (0xc000b09360) Create stream\nI0202 23:37:20.268659 2585 log.go:181] (0xc0009e0000) (0xc000b09360) Stream added, broadcasting: 5\nI0202 23:37:20.269809 2585 log.go:181] (0xc0009e0000) Reply frame received for 5\nI0202 23:37:20.322697 2585 log.go:181] (0xc0009e0000) Data frame received for 5\nI0202 23:37:20.322722 2585 log.go:181] (0xc000b09360) (5) Data frame handling\nI0202 23:37:20.322730 2585 log.go:181] (0xc000b09360) (5) Data frame sent\n+ nc -zv -t -w 2 172.18.0.12 30135\nConnection to 172.18.0.12 30135 port [tcp/30135] succeeded!\nI0202 23:37:20.322743 2585 log.go:181] (0xc0009e0000) Data frame received for 3\nI0202 23:37:20.322748 2585 log.go:181] (0xc000b090e0) (3) Data frame handling\nI0202 23:37:20.322825 2585 log.go:181] (0xc0009e0000) Data frame received for 5\nI0202 23:37:20.322843 2585 log.go:181] (0xc000b09360) (5) Data frame handling\nI0202 23:37:20.324646 2585 log.go:181] (0xc0009e0000) Data frame received for 1\nI0202 23:37:20.324668 2585 log.go:181] (0xc000d80000) (1) Data frame handling\nI0202 23:37:20.324686 2585 log.go:181] (0xc000d80000) (1) Data frame sent\nI0202 23:37:20.324700 2585 log.go:181] (0xc0009e0000) (0xc000d80000) Stream removed, broadcasting: 1\nI0202 23:37:20.325020 2585 log.go:181] (0xc0009e0000) Go away received\nI0202 23:37:20.325085 2585 log.go:181] (0xc0009e0000) (0xc000d80000) Stream removed, broadcasting: 1\nI0202 23:37:20.325097 2585 log.go:181] (0xc0009e0000) (0xc000b090e0) Stream removed, broadcasting: 3\nI0202 23:37:20.325103 2585 log.go:181] (0xc0009e0000) (0xc000b09360) Stream removed, broadcasting: 5\n" Feb 2 23:37:20.330: INFO: stdout: "" Feb 2 23:37:20.330: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=services-525 exec execpod-affinitymxqrc -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 
http://172.18.0.13:30135/ ; done' Feb 2 23:37:20.636: INFO: stderr: "I0202 23:37:20.474619 2604 log.go:181] (0xc00003a0b0) (0xc0004250e0) Create stream\nI0202 23:37:20.474699 2604 log.go:181] (0xc00003a0b0) (0xc0004250e0) Stream added, broadcasting: 1\nI0202 23:37:20.476982 2604 log.go:181] (0xc00003a0b0) Reply frame received for 1\nI0202 23:37:20.477056 2604 log.go:181] (0xc00003a0b0) (0xc000425900) Create stream\nI0202 23:37:20.477084 2604 log.go:181] (0xc00003a0b0) (0xc000425900) Stream added, broadcasting: 3\nI0202 23:37:20.478139 2604 log.go:181] (0xc00003a0b0) Reply frame received for 3\nI0202 23:37:20.478181 2604 log.go:181] (0xc00003a0b0) (0xc000c3e140) Create stream\nI0202 23:37:20.478194 2604 log.go:181] (0xc00003a0b0) (0xc000c3e140) Stream added, broadcasting: 5\nI0202 23:37:20.479205 2604 log.go:181] (0xc00003a0b0) Reply frame received for 5\nI0202 23:37:20.538288 2604 log.go:181] (0xc00003a0b0) Data frame received for 5\nI0202 23:37:20.538351 2604 log.go:181] (0xc000c3e140) (5) Data frame handling\nI0202 23:37:20.538376 2604 log.go:181] (0xc000c3e140) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.13:30135/\nI0202 23:37:20.538407 2604 log.go:181] (0xc00003a0b0) Data frame received for 3\nI0202 23:37:20.538433 2604 log.go:181] (0xc000425900) (3) Data frame handling\nI0202 23:37:20.538467 2604 log.go:181] (0xc000425900) (3) Data frame sent\nI0202 23:37:20.546805 2604 log.go:181] (0xc00003a0b0) Data frame received for 3\nI0202 23:37:20.546837 2604 log.go:181] (0xc000425900) (3) Data frame handling\nI0202 23:37:20.546871 2604 log.go:181] (0xc000425900) (3) Data frame sent\nI0202 23:37:20.547307 2604 log.go:181] (0xc00003a0b0) Data frame received for 5\nI0202 23:37:20.547327 2604 log.go:181] (0xc000c3e140) (5) Data frame handling\nI0202 23:37:20.547355 2604 log.go:181] (0xc000c3e140) (5) Data frame sent\nI0202 23:37:20.547375 2604 log.go:181] (0xc00003a0b0) Data frame received for 5\nI0202 23:37:20.547391 2604 log.go:181] (0xc000c3e140) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.13:30135/\nI0202 23:37:20.547427 2604 log.go:181] (0xc00003a0b0) Data frame received for 3\nI0202 23:37:20.547470 2604 log.go:181] (0xc000425900) (3) Data frame handling\nI0202 23:37:20.547490 2604 log.go:181] (0xc000425900) (3) Data frame sent\nI0202 23:37:20.547509 2604 log.go:181] (0xc000c3e140) (5) Data frame sent\nI0202 23:37:20.551030 2604 log.go:181] (0xc00003a0b0) Data frame received for 3\nI0202 23:37:20.551049 2604 log.go:181] (0xc000425900) (3) Data frame handling\nI0202 23:37:20.551069 2604 log.go:181] (0xc000425900) (3) Data frame sent\nI0202 23:37:20.551904 2604 log.go:181] (0xc00003a0b0) Data frame received for 3\nI0202 23:37:20.551970 2604 log.go:181] (0xc000425900) (3) Data frame handling\nI0202 23:37:20.551998 2604 log.go:181] (0xc000425900) (3) Data frame sent\nI0202 23:37:20.552027 2604 log.go:181] (0xc00003a0b0) Data frame received for 5\nI0202 23:37:20.552056 2604 log.go:181] (0xc000c3e140) (5) Data frame handling\nI0202 23:37:20.552080 2604 log.go:181] (0xc000c3e140) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.13:30135/\nI0202 23:37:20.557767 2604 log.go:181] (0xc00003a0b0) Data frame received for 5\nI0202 23:37:20.557863 2604 log.go:181] (0xc000c3e140) (5) Data frame handling\nI0202 23:37:20.557881 2604 log.go:181] (0xc000c3e140) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.13:30135/\nI0202 23:37:20.557897 2604 log.go:181] (0xc00003a0b0) 
Data frame received for 3\nI0202 23:37:20.557905 2604 log.go:181] (0xc000425900) (3) Data frame handling\nI0202 23:37:20.557920 2604 log.go:181] (0xc000425900) (3) Data frame sent\nI0202 23:37:20.557930 2604 log.go:181] (0xc00003a0b0) Data frame received for 3\nI0202 23:37:20.557938 2604 log.go:181] (0xc000425900) (3) Data frame handling\nI0202 23:37:20.557957 2604 log.go:181] (0xc000425900) (3) Data frame sent\nI0202 23:37:20.562088 2604 log.go:181] (0xc00003a0b0) Data frame received for 3\nI0202 23:37:20.562104 2604 log.go:181] (0xc000425900) (3) Data frame handling\nI0202 23:37:20.562112 2604 log.go:181] (0xc000425900) (3) Data frame sent\nI0202 23:37:20.562728 2604 log.go:181] (0xc00003a0b0) Data frame received for 3\nI0202 23:37:20.562754 2604 log.go:181] (0xc00003a0b0) Data frame received for 5\nI0202 23:37:20.562781 2604 log.go:181] (0xc000c3e140) (5) Data frame handling\nI0202 23:37:20.562804 2604 log.go:181] (0xc000c3e140) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.13:30135/\nI0202 23:37:20.562827 2604 log.go:181] (0xc000425900) (3) Data frame handling\nI0202 23:37:20.562844 2604 log.go:181] (0xc000425900) (3) Data frame sent\nI0202 23:37:20.567045 2604 log.go:181] (0xc00003a0b0) Data frame received for 3\nI0202 23:37:20.567062 2604 log.go:181] (0xc000425900) (3) Data frame handling\nI0202 23:37:20.567074 2604 log.go:181] (0xc000425900) (3) Data frame sent\nI0202 23:37:20.567422 2604 log.go:181] (0xc00003a0b0) Data frame received for 5\nI0202 23:37:20.567437 2604 log.go:181] (0xc000c3e140) (5) Data frame handling\nI0202 23:37:20.567449 2604 log.go:181] (0xc000c3e140) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.13:30135/\nI0202 23:37:20.567462 2604 log.go:181] (0xc00003a0b0) Data frame received for 3\nI0202 23:37:20.567475 2604 log.go:181] (0xc000425900) (3) Data frame handling\nI0202 23:37:20.567485 2604 log.go:181] (0xc000425900) (3) Data frame sent\nI0202 23:37:20.571813 2604 log.go:181] (0xc00003a0b0) Data frame received for 3\nI0202 23:37:20.571835 2604 log.go:181] (0xc000425900) (3) Data frame handling\nI0202 23:37:20.571852 2604 log.go:181] (0xc000425900) (3) Data frame sent\nI0202 23:37:20.572376 2604 log.go:181] (0xc00003a0b0) Data frame received for 5\nI0202 23:37:20.572394 2604 log.go:181] (0xc000c3e140) (5) Data frame handling\nI0202 23:37:20.572403 2604 log.go:181] (0xc000c3e140) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.13:30135/\nI0202 23:37:20.572421 2604 log.go:181] (0xc00003a0b0) Data frame received for 3\nI0202 23:37:20.572448 2604 log.go:181] (0xc000425900) (3) Data frame handling\nI0202 23:37:20.572471 2604 log.go:181] (0xc000425900) (3) Data frame sent\nI0202 23:37:20.577864 2604 log.go:181] (0xc00003a0b0) Data frame received for 3\nI0202 23:37:20.577879 2604 log.go:181] (0xc000425900) (3) Data frame handling\nI0202 23:37:20.577888 2604 log.go:181] (0xc000425900) (3) Data frame sent\nI0202 23:37:20.578493 2604 log.go:181] (0xc00003a0b0) Data frame received for 3\nI0202 23:37:20.578516 2604 log.go:181] (0xc000425900) (3) Data frame handling\nI0202 23:37:20.578529 2604 log.go:181] (0xc000425900) (3) Data frame sent\nI0202 23:37:20.578548 2604 log.go:181] (0xc00003a0b0) Data frame received for 5\nI0202 23:37:20.578566 2604 log.go:181] (0xc000c3e140) (5) Data frame handling\nI0202 23:37:20.578587 2604 log.go:181] (0xc000c3e140) (5) Data frame sent\nI0202 23:37:20.578604 2604 log.go:181] (0xc00003a0b0) Data frame received for 5\nI0202 23:37:20.578618 2604 
log.go:181] (0xc000c3e140) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.13:30135/\nI0202 23:37:20.578644 2604 log.go:181] (0xc000c3e140) (5) Data frame sent\nI0202 23:37:20.583514 2604 log.go:181] (0xc00003a0b0) Data frame received for 3\nI0202 23:37:20.583536 2604 log.go:181] (0xc000425900) (3) Data frame handling\nI0202 23:37:20.583549 2604 log.go:181] (0xc000425900) (3) Data frame sent\nI0202 23:37:20.584314 2604 log.go:181] (0xc00003a0b0) Data frame received for 3\nI0202 23:37:20.584330 2604 log.go:181] (0xc000425900) (3) Data frame handling\nI0202 23:37:20.584339 2604 log.go:181] (0xc000425900) (3) Data frame sent\nI0202 23:37:20.584361 2604 log.go:181] (0xc00003a0b0) Data frame received for 5\nI0202 23:37:20.584387 2604 log.go:181] (0xc000c3e140) (5) Data frame handling\nI0202 23:37:20.584405 2604 log.go:181] (0xc000c3e140) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.13:30135/\nI0202 23:37:20.588073 2604 log.go:181] (0xc00003a0b0) Data frame received for 3\nI0202 23:37:20.588093 2604 log.go:181] (0xc000425900) (3) Data frame handling\nI0202 23:37:20.588115 2604 log.go:181] (0xc000425900) (3) Data frame sent\nI0202 23:37:20.588564 2604 log.go:181] (0xc00003a0b0) Data frame received for 3\nI0202 23:37:20.588581 2604 log.go:181] (0xc000425900) (3) Data frame handling\nI0202 23:37:20.588606 2604 log.go:181] (0xc000425900) (3) Data frame sent\nI0202 23:37:20.588622 2604 log.go:181] (0xc00003a0b0) Data frame received for 5\nI0202 23:37:20.588633 2604 log.go:181] (0xc000c3e140) (5) Data frame handling\nI0202 23:37:20.588651 2604 log.go:181] (0xc000c3e140) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.13:30135/\nI0202 23:37:20.593883 2604 log.go:181] (0xc00003a0b0) Data frame received for 3\nI0202 23:37:20.593904 2604 log.go:181] (0xc000425900) (3) Data frame handling\nI0202 23:37:20.593914 2604 log.go:181] (0xc000425900) (3) Data frame sent\nI0202 23:37:20.594630 2604 log.go:181] (0xc00003a0b0) Data frame received for 5\nI0202 23:37:20.594656 2604 log.go:181] (0xc000c3e140) (5) Data frame handling\nI0202 23:37:20.594683 2604 log.go:181] (0xc000c3e140) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.13:30135/\nI0202 23:37:20.594835 2604 log.go:181] (0xc00003a0b0) Data frame received for 3\nI0202 23:37:20.594866 2604 log.go:181] (0xc000425900) (3) Data frame handling\nI0202 23:37:20.594887 2604 log.go:181] (0xc000425900) (3) Data frame sent\nI0202 23:37:20.597957 2604 log.go:181] (0xc00003a0b0) Data frame received for 3\nI0202 23:37:20.597988 2604 log.go:181] (0xc000425900) (3) Data frame handling\nI0202 23:37:20.598017 2604 log.go:181] (0xc000425900) (3) Data frame sent\nI0202 23:37:20.598313 2604 log.go:181] (0xc00003a0b0) Data frame received for 3\nI0202 23:37:20.598331 2604 log.go:181] (0xc00003a0b0) Data frame received for 5\nI0202 23:37:20.598346 2604 log.go:181] (0xc000c3e140) (5) Data frame handling\nI0202 23:37:20.598355 2604 log.go:181] (0xc000c3e140) (5) Data frame sent\nI0202 23:37:20.598361 2604 log.go:181] (0xc00003a0b0) Data frame received for 5\nI0202 23:37:20.598366 2604 log.go:181] (0xc000c3e140) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.13:30135/\nI0202 23:37:20.598383 2604 log.go:181] (0xc000c3e140) (5) Data frame sent\nI0202 23:37:20.598389 2604 log.go:181] (0xc000425900) (3) Data frame handling\nI0202 23:37:20.598398 2604 log.go:181] (0xc000425900) (3) Data frame sent\nI0202 23:37:20.603074 2604 
log.go:181] (0xc00003a0b0) Data frame received for 3\nI0202 23:37:20.603089 2604 log.go:181] (0xc000425900) (3) Data frame handling\nI0202 23:37:20.603101 2604 log.go:181] (0xc000425900) (3) Data frame sent\nI0202 23:37:20.603914 2604 log.go:181] (0xc00003a0b0) Data frame received for 5\nI0202 23:37:20.603937 2604 log.go:181] (0xc000c3e140) (5) Data frame handling\nI0202 23:37:20.603949 2604 log.go:181] (0xc000c3e140) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.13:30135/\nI0202 23:37:20.603983 2604 log.go:181] (0xc00003a0b0) Data frame received for 3\nI0202 23:37:20.604064 2604 log.go:181] (0xc000425900) (3) Data frame handling\nI0202 23:37:20.604082 2604 log.go:181] (0xc000425900) (3) Data frame sent\nI0202 23:37:20.609061 2604 log.go:181] (0xc00003a0b0) Data frame received for 3\nI0202 23:37:20.609083 2604 log.go:181] (0xc000425900) (3) Data frame handling\nI0202 23:37:20.609097 2604 log.go:181] (0xc000425900) (3) Data frame sent\nI0202 23:37:20.610042 2604 log.go:181] (0xc00003a0b0) Data frame received for 3\nI0202 23:37:20.610055 2604 log.go:181] (0xc000425900) (3) Data frame handling\nI0202 23:37:20.610065 2604 log.go:181] (0xc000425900) (3) Data frame sent\nI0202 23:37:20.610075 2604 log.go:181] (0xc00003a0b0) Data frame received for 5\nI0202 23:37:20.610082 2604 log.go:181] (0xc000c3e140) (5) Data frame handling\nI0202 23:37:20.610096 2604 log.go:181] (0xc000c3e140) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.13:30135/\nI0202 23:37:20.615454 2604 log.go:181] (0xc00003a0b0) Data frame received for 3\nI0202 23:37:20.615473 2604 log.go:181] (0xc000425900) (3) Data frame handling\nI0202 23:37:20.615484 2604 log.go:181] (0xc000425900) (3) Data frame sent\nI0202 23:37:20.616231 2604 log.go:181] (0xc00003a0b0) Data frame received for 3\nI0202 23:37:20.616245 2604 log.go:181] (0xc000425900) (3) Data frame handling\nI0202 23:37:20.616255 2604 log.go:181] (0xc000425900) (3) Data frame sent\nI0202 23:37:20.616295 2604 log.go:181] (0xc00003a0b0) Data frame received for 5\nI0202 23:37:20.616331 2604 log.go:181] (0xc000c3e140) (5) Data frame handling\nI0202 23:37:20.616384 2604 log.go:181] (0xc000c3e140) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.13:30135/\nI0202 23:37:20.621935 2604 log.go:181] (0xc00003a0b0) Data frame received for 3\nI0202 23:37:20.621947 2604 log.go:181] (0xc000425900) (3) Data frame handling\nI0202 23:37:20.621954 2604 log.go:181] (0xc000425900) (3) Data frame sent\nI0202 23:37:20.622577 2604 log.go:181] (0xc00003a0b0) Data frame received for 3\nI0202 23:37:20.622589 2604 log.go:181] (0xc000425900) (3) Data frame handling\nI0202 23:37:20.622595 2604 log.go:181] (0xc000425900) (3) Data frame sent\nI0202 23:37:20.622616 2604 log.go:181] (0xc00003a0b0) Data frame received for 5\nI0202 23:37:20.622638 2604 log.go:181] (0xc000c3e140) (5) Data frame handling\nI0202 23:37:20.622658 2604 log.go:181] (0xc000c3e140) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.13:30135/\nI0202 23:37:20.626958 2604 log.go:181] (0xc00003a0b0) Data frame received for 3\nI0202 23:37:20.626976 2604 log.go:181] (0xc000425900) (3) Data frame handling\nI0202 23:37:20.626987 2604 log.go:181] (0xc000425900) (3) Data frame sent\nI0202 23:37:20.627669 2604 log.go:181] (0xc00003a0b0) Data frame received for 3\nI0202 23:37:20.627685 2604 log.go:181] (0xc000425900) (3) Data frame handling\nI0202 23:37:20.627708 2604 log.go:181] (0xc00003a0b0) Data frame received for 5\nI0202 
23:37:20.627724 2604 log.go:181] (0xc000c3e140) (5) Data frame handling\nI0202 23:37:20.629689 2604 log.go:181] (0xc00003a0b0) Data frame received for 1\nI0202 23:37:20.629737 2604 log.go:181] (0xc0004250e0) (1) Data frame handling\nI0202 23:37:20.629803 2604 log.go:181] (0xc0004250e0) (1) Data frame sent\nI0202 23:37:20.629843 2604 log.go:181] (0xc00003a0b0) (0xc0004250e0) Stream removed, broadcasting: 1\nI0202 23:37:20.629870 2604 log.go:181] (0xc00003a0b0) Go away received\nI0202 23:37:20.630332 2604 log.go:181] (0xc00003a0b0) (0xc0004250e0) Stream removed, broadcasting: 1\nI0202 23:37:20.630357 2604 log.go:181] (0xc00003a0b0) (0xc000425900) Stream removed, broadcasting: 3\nI0202 23:37:20.630371 2604 log.go:181] (0xc00003a0b0) (0xc000c3e140) Stream removed, broadcasting: 5\n" Feb 2 23:37:20.637: INFO: stdout: "\naffinity-nodeport-zwhnj\naffinity-nodeport-zwhnj\naffinity-nodeport-zwhnj\naffinity-nodeport-zwhnj\naffinity-nodeport-zwhnj\naffinity-nodeport-zwhnj\naffinity-nodeport-zwhnj\naffinity-nodeport-zwhnj\naffinity-nodeport-zwhnj\naffinity-nodeport-zwhnj\naffinity-nodeport-zwhnj\naffinity-nodeport-zwhnj\naffinity-nodeport-zwhnj\naffinity-nodeport-zwhnj\naffinity-nodeport-zwhnj\naffinity-nodeport-zwhnj" Feb 2 23:37:20.637: INFO: Received response from host: affinity-nodeport-zwhnj Feb 2 23:37:20.637: INFO: Received response from host: affinity-nodeport-zwhnj Feb 2 23:37:20.637: INFO: Received response from host: affinity-nodeport-zwhnj Feb 2 23:37:20.637: INFO: Received response from host: affinity-nodeport-zwhnj Feb 2 23:37:20.637: INFO: Received response from host: affinity-nodeport-zwhnj Feb 2 23:37:20.637: INFO: Received response from host: affinity-nodeport-zwhnj Feb 2 23:37:20.637: INFO: Received response from host: affinity-nodeport-zwhnj Feb 2 23:37:20.637: INFO: Received response from host: affinity-nodeport-zwhnj Feb 2 23:37:20.637: INFO: Received response from host: affinity-nodeport-zwhnj Feb 2 23:37:20.637: INFO: Received response from host: affinity-nodeport-zwhnj Feb 2 23:37:20.637: INFO: Received response from host: affinity-nodeport-zwhnj Feb 2 23:37:20.637: INFO: Received response from host: affinity-nodeport-zwhnj Feb 2 23:37:20.637: INFO: Received response from host: affinity-nodeport-zwhnj Feb 2 23:37:20.637: INFO: Received response from host: affinity-nodeport-zwhnj Feb 2 23:37:20.637: INFO: Received response from host: affinity-nodeport-zwhnj Feb 2 23:37:20.637: INFO: Received response from host: affinity-nodeport-zwhnj Feb 2 23:37:20.637: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-nodeport in namespace services-525, will wait for the garbage collector to delete the pods Feb 2 23:37:20.836: INFO: Deleting ReplicationController affinity-nodeport took: 76.091216ms Feb 2 23:37:21.536: INFO: Terminating ReplicationController affinity-nodeport pods took: 700.273412ms [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 2 23:38:20.280: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-525" for this suite. 
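The session-affinity check above repeatedly curls the NodePort endpoint (http://172.18.0.13:30135/) from an exec pod and expects every response to come back from the same backend pod (affinity-nodeport-zwhnj). For reference, a minimal sketch of the kind of Service object such a test exercises is shown below; the selector and port values are illustrative assumptions, not values taken from this log.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	// NodePort Service with ClientIP session affinity: kube-proxy pins each
	// client source IP to one backend pod, which is why the curl loop above
	// keeps returning the same pod name.
	svc := &corev1.Service{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Service"},
		ObjectMeta: metav1.ObjectMeta{Name: "affinity-nodeport"},
		Spec: corev1.ServiceSpec{
			Type:            corev1.ServiceTypeNodePort,
			SessionAffinity: corev1.ServiceAffinityClientIP,
			// Selector and ports are placeholders for illustration only.
			Selector: map[string]string{"name": "affinity-nodeport"},
			Ports: []corev1.ServicePort{{
				Protocol:   corev1.ProtocolTCP,
				Port:       80,
				TargetPort: intstr.FromInt(8080),
			}},
		},
	}
	out, _ := json.MarshalIndent(svc, "", "  ")
	fmt.Println(string(out)) // could be piped to `kubectl create -f -`
}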
[AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 • [SLOW TEST:75.061 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should have session affinity work for NodePort service [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","total":309,"completed":165,"skipped":2918,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 2 23:38:20.300: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide host IP as an env var [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a pod to test downward api env vars Feb 2 23:38:20.420: INFO: Waiting up to 5m0s for pod "downward-api-ff80ed2d-f9e5-437e-8578-f0467cf58d55" in namespace "downward-api-745" to be "Succeeded or Failed" Feb 2 23:38:20.457: INFO: Pod "downward-api-ff80ed2d-f9e5-437e-8578-f0467cf58d55": Phase="Pending", Reason="", readiness=false. Elapsed: 36.243798ms Feb 2 23:38:22.631: INFO: Pod "downward-api-ff80ed2d-f9e5-437e-8578-f0467cf58d55": Phase="Pending", Reason="", readiness=false. Elapsed: 2.210630776s Feb 2 23:38:24.636: INFO: Pod "downward-api-ff80ed2d-f9e5-437e-8578-f0467cf58d55": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.215448784s STEP: Saw pod success Feb 2 23:38:24.636: INFO: Pod "downward-api-ff80ed2d-f9e5-437e-8578-f0467cf58d55" satisfied condition "Succeeded or Failed" Feb 2 23:38:24.644: INFO: Trying to get logs from node leguer-worker pod downward-api-ff80ed2d-f9e5-437e-8578-f0467cf58d55 container dapi-container: STEP: delete the pod Feb 2 23:38:24.689: INFO: Waiting for pod downward-api-ff80ed2d-f9e5-437e-8578-f0467cf58d55 to disappear Feb 2 23:38:24.698: INFO: Pod downward-api-ff80ed2d-f9e5-437e-8578-f0467cf58d55 no longer exists [AfterEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 2 23:38:24.698: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-745" for this suite. 
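The downward-api test above verifies that the node's IP (status.hostIP) can be injected into a container as an environment variable and observed in the container's output. A minimal sketch of a pod using that mechanism follows; the container name dapi-container comes from the log, while the pod name, image tag, and echoed variable are assumptions for illustration.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "downward-api-hostip"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "dapi-container",
				Image:   "busybox:1.29",
				Command: []string{"sh", "-c", "echo HOST_IP=$HOST_IP"},
				Env: []corev1.EnvVar{{
					Name: "HOST_IP",
					// Downward API: resolve the node IP at pod admission time
					// and expose it to the container as an env var.
					ValueFrom: &corev1.EnvVarSource{
						FieldRef: &corev1.ObjectFieldSelector{FieldPath: "status.hostIP"},
					},
				}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}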
•{"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":309,"completed":166,"skipped":2932,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 2 23:38:24.722: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247 [It] should support --unix-socket=/path [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Starting the proxy Feb 2 23:38:24.770: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-4218 proxy --unix-socket=/tmp/kubectl-proxy-unix521697117/test' STEP: retrieving proxy /api/ output [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 2 23:38:24.840: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4218" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance]","total":309,"completed":167,"skipped":2950,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 2 23:38:24.857: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with secret pod [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating pod pod-subpath-test-secret-wgwd STEP: Creating a pod to test atomic-volume-subpath Feb 2 23:38:25.030: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-wgwd" in namespace "subpath-1235" to be "Succeeded or Failed" Feb 2 23:38:25.049: INFO: Pod "pod-subpath-test-secret-wgwd": Phase="Pending", Reason="", readiness=false. Elapsed: 18.330999ms Feb 2 23:38:27.053: INFO: Pod "pod-subpath-test-secret-wgwd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022626133s Feb 2 23:38:29.056: INFO: Pod "pod-subpath-test-secret-wgwd": Phase="Running", Reason="", readiness=true. 
Elapsed: 4.02545521s Feb 2 23:38:31.060: INFO: Pod "pod-subpath-test-secret-wgwd": Phase="Running", Reason="", readiness=true. Elapsed: 6.030064421s Feb 2 23:38:33.065: INFO: Pod "pod-subpath-test-secret-wgwd": Phase="Running", Reason="", readiness=true. Elapsed: 8.035118349s Feb 2 23:38:35.070: INFO: Pod "pod-subpath-test-secret-wgwd": Phase="Running", Reason="", readiness=true. Elapsed: 10.040112513s Feb 2 23:38:37.076: INFO: Pod "pod-subpath-test-secret-wgwd": Phase="Running", Reason="", readiness=true. Elapsed: 12.045610325s Feb 2 23:38:39.080: INFO: Pod "pod-subpath-test-secret-wgwd": Phase="Running", Reason="", readiness=true. Elapsed: 14.049531472s Feb 2 23:38:41.085: INFO: Pod "pod-subpath-test-secret-wgwd": Phase="Running", Reason="", readiness=true. Elapsed: 16.054377323s Feb 2 23:38:43.090: INFO: Pod "pod-subpath-test-secret-wgwd": Phase="Running", Reason="", readiness=true. Elapsed: 18.059300515s Feb 2 23:38:45.099: INFO: Pod "pod-subpath-test-secret-wgwd": Phase="Running", Reason="", readiness=true. Elapsed: 20.068948816s Feb 2 23:38:47.104: INFO: Pod "pod-subpath-test-secret-wgwd": Phase="Running", Reason="", readiness=true. Elapsed: 22.07382604s Feb 2 23:38:49.108: INFO: Pod "pod-subpath-test-secret-wgwd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.077514635s STEP: Saw pod success Feb 2 23:38:49.108: INFO: Pod "pod-subpath-test-secret-wgwd" satisfied condition "Succeeded or Failed" Feb 2 23:38:49.111: INFO: Trying to get logs from node leguer-worker pod pod-subpath-test-secret-wgwd container test-container-subpath-secret-wgwd: STEP: delete the pod Feb 2 23:38:49.127: INFO: Waiting for pod pod-subpath-test-secret-wgwd to disappear Feb 2 23:38:49.131: INFO: Pod pod-subpath-test-secret-wgwd no longer exists STEP: Deleting pod pod-subpath-test-secret-wgwd Feb 2 23:38:49.131: INFO: Deleting pod "pod-subpath-test-secret-wgwd" in namespace "subpath-1235" [AfterEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 2 23:38:49.134: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-1235" for this suite. 
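The subpath test above mounts a secret-backed (atomic-writer) volume through a subPath and keeps the pod Running for roughly twenty seconds before it succeeds. A rough sketch of a pod that mounts a single key of a secret via subPath follows; the secret name, key, mount paths, and command are illustrative assumptions, not values recorded in this log.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "pod-subpath-secret"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "secret-volume",
				VolumeSource: corev1.VolumeSource{
					// Placeholder secret; it must exist in the namespace.
					Secret: &corev1.SecretVolumeSource{SecretName: "my-secret"},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "test-container-subpath",
				Image:   "busybox:1.29",
				Command: []string{"sh", "-c", "cat /probe-volume/data-1 && sleep 20"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "secret-volume",
					MountPath: "/probe-volume/data-1",
					// subPath mounts a single entry of the volume rather than
					// the whole directory; atomic-writer volumes (secret,
					// configmap, downward API) are what this family of tests covers.
					SubPath: "data-1",
				}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}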
• [SLOW TEST:24.284 seconds] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with secret pod [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":309,"completed":168,"skipped":2971,"failed":0} SS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 2 23:38:49.141: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD without validation schema [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 Feb 2 23:38:49.230: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Feb 2 23:38:52.786: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2541 --namespace=crd-publish-openapi-2541 create -f -' Feb 2 23:38:56.439: INFO: stderr: "" Feb 2 23:38:56.439: INFO: stdout: "e2e-test-crd-publish-openapi-6876-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" Feb 2 23:38:56.439: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2541 --namespace=crd-publish-openapi-2541 delete e2e-test-crd-publish-openapi-6876-crds test-cr' Feb 2 23:38:56.541: INFO: stderr: "" Feb 2 23:38:56.541: INFO: stdout: "e2e-test-crd-publish-openapi-6876-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" Feb 2 23:38:56.541: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2541 --namespace=crd-publish-openapi-2541 apply -f -' Feb 2 23:38:56.862: INFO: stderr: "" Feb 2 23:38:56.862: INFO: stdout: "e2e-test-crd-publish-openapi-6876-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" Feb 2 23:38:56.862: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2541 --namespace=crd-publish-openapi-2541 delete e2e-test-crd-publish-openapi-6876-crds test-cr' Feb 2 23:38:56.980: INFO: stderr: "" Feb 2 23:38:56.980: INFO: stdout: "e2e-test-crd-publish-openapi-6876-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR without validation schema Feb 2 23:38:56.981: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 
--kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2541 explain e2e-test-crd-publish-openapi-6876-crds' Feb 2 23:38:57.283: INFO: stderr: "" Feb 2 23:38:57.283: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-6876-crd\nVERSION: crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 2 23:39:00.821: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-2541" for this suite. • [SLOW TEST:11.692 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD without validation schema [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":309,"completed":169,"skipped":2973,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 2 23:39:00.833: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a secret. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Discovering how many secrets are in namespace by default STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Secret STEP: Ensuring resource quota status captures secret creation STEP: Deleting a secret STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 2 23:39:17.955: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-6328" for this suite. • [SLOW TEST:17.130 seconds] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a secret. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. 
[Conformance]","total":309,"completed":170,"skipped":2979,"failed":0} SSS ------------------------------ [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 2 23:39:17.963: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create ConfigMap with empty key [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating configMap that has name configmap-test-emptyKey-400acf4c-60a1-4ca1-b4b5-560d24f7f8aa [AfterEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 2 23:39:18.051: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-6748" for this suite. •{"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":309,"completed":171,"skipped":2982,"failed":0} ------------------------------ [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 2 23:39:18.065: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating secret with name secret-test-2c738f07-9be2-44b1-9487-934750e902db STEP: Creating a pod to test consume secrets Feb 2 23:39:18.182: INFO: Waiting up to 5m0s for pod "pod-secrets-3cfffc9f-e888-4aa7-871a-3793210523a5" in namespace "secrets-3844" to be "Succeeded or Failed" Feb 2 23:39:18.207: INFO: Pod "pod-secrets-3cfffc9f-e888-4aa7-871a-3793210523a5": Phase="Pending", Reason="", readiness=false. Elapsed: 24.90372ms Feb 2 23:39:20.212: INFO: Pod "pod-secrets-3cfffc9f-e888-4aa7-871a-3793210523a5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030196374s Feb 2 23:39:22.216: INFO: Pod "pod-secrets-3cfffc9f-e888-4aa7-871a-3793210523a5": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.034281192s STEP: Saw pod success Feb 2 23:39:22.216: INFO: Pod "pod-secrets-3cfffc9f-e888-4aa7-871a-3793210523a5" satisfied condition "Succeeded or Failed" Feb 2 23:39:22.219: INFO: Trying to get logs from node leguer-worker pod pod-secrets-3cfffc9f-e888-4aa7-871a-3793210523a5 container secret-volume-test: STEP: delete the pod Feb 2 23:39:22.363: INFO: Waiting for pod pod-secrets-3cfffc9f-e888-4aa7-871a-3793210523a5 to disappear Feb 2 23:39:22.443: INFO: Pod pod-secrets-3cfffc9f-e888-4aa7-871a-3793210523a5 no longer exists [AfterEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 2 23:39:22.443: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-3844" for this suite. •{"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":309,"completed":172,"skipped":2982,"failed":0} SSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 2 23:39:22.454: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replica set. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicaSet STEP: Ensuring resource quota status captures replicaset creation STEP: Deleting a ReplicaSet STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 2 23:39:33.625: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-946" for this suite. • [SLOW TEST:11.181 seconds] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replica set. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. 
[Conformance]","total":309,"completed":173,"skipped":2987,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 2 23:39:33.636: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247 [BeforeEach] Kubectl label /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1314 STEP: creating the pod Feb 2 23:39:33.751: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-8512 create -f -' Feb 2 23:39:34.172: INFO: stderr: "" Feb 2 23:39:34.172: INFO: stdout: "pod/pause created\n" Feb 2 23:39:34.172: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] Feb 2 23:39:34.172: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-8512" to be "running and ready" Feb 2 23:39:34.236: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 63.194053ms Feb 2 23:39:36.249: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.07656585s Feb 2 23:39:38.254: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 4.081525672s Feb 2 23:39:38.254: INFO: Pod "pause" satisfied condition "running and ready" Feb 2 23:39:38.254: INFO: Wanted all 1 pods to be running and ready. Result: true. 
Pods: [pause] [It] should update the label on a resource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: adding the label testing-label with value testing-label-value to a pod Feb 2 23:39:38.254: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-8512 label pods pause testing-label=testing-label-value' Feb 2 23:39:38.361: INFO: stderr: "" Feb 2 23:39:38.361: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod has the label testing-label with the value testing-label-value Feb 2 23:39:38.361: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-8512 get pod pause -L testing-label' Feb 2 23:39:38.451: INFO: stderr: "" Feb 2 23:39:38.451: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 4s testing-label-value\n" STEP: removing the label testing-label of a pod Feb 2 23:39:38.451: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-8512 label pods pause testing-label-' Feb 2 23:39:38.560: INFO: stderr: "" Feb 2 23:39:38.561: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod doesn't have the label testing-label Feb 2 23:39:38.561: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-8512 get pod pause -L testing-label' Feb 2 23:39:38.656: INFO: stderr: "" Feb 2 23:39:38.656: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 4s \n" [AfterEach] Kubectl label /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1320 STEP: using delete to clean up resources Feb 2 23:39:38.656: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-8512 delete --grace-period=0 --force -f -' Feb 2 23:39:38.760: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Feb 2 23:39:38.760: INFO: stdout: "pod \"pause\" force deleted\n" Feb 2 23:39:38.760: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-8512 get rc,svc -l name=pause --no-headers' Feb 2 23:39:38.872: INFO: stderr: "No resources found in kubectl-8512 namespace.\n" Feb 2 23:39:38.872: INFO: stdout: "" Feb 2 23:39:38.872: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-8512 get pods -l name=pause -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Feb 2 23:39:38.967: INFO: stderr: "" Feb 2 23:39:38.967: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 2 23:39:38.967: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8512" for this suite. 
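The label test above drives `kubectl label pods pause testing-label=testing-label-value` and then removes the label again with the `testing-label-` form. The same add/remove cycle can be performed directly against the API with client-go; the sketch below assumes a kubeconfig path in $KUBECONFIG and a live pod named pause in namespace kubectl-8512 (which no longer exists by this point in the run), so it is illustration only.

package main

import (
	"context"
	"fmt"
	"log"
	"os"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	ctx := context.TODO()
	pods := cs.CoreV1().Pods("kubectl-8512")

	// Equivalent of: kubectl label pods pause testing-label=testing-label-value
	pod, err := pods.Get(ctx, "pause", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	if pod.Labels == nil {
		pod.Labels = map[string]string{}
	}
	pod.Labels["testing-label"] = "testing-label-value"
	if _, err := pods.Update(ctx, pod, metav1.UpdateOptions{}); err != nil {
		log.Fatal(err)
	}

	// Equivalent of: kubectl label pods pause testing-label-
	pod, err = pods.Get(ctx, "pause", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	delete(pod.Labels, "testing-label")
	if _, err := pods.Update(ctx, pod, metav1.UpdateOptions{}); err != nil {
		log.Fatal(err)
	}
	fmt.Println("label added and removed")
}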
• [SLOW TEST:5.491 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl label /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1312 should update the label on a resource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance]","total":309,"completed":174,"skipped":3027,"failed":0} S ------------------------------ [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [k8s.io] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 2 23:39:39.127: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [It] should print the output to logs [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [AfterEach] [k8s.io] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 2 23:39:43.406: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-16" for this suite. 
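The kubelet test above schedules a busybox command in a pod and then checks that the command's output is visible in the container logs. A minimal sketch of such a pod is shown below; the pod name, image tag, and echoed text are assumptions for illustration, not values from the log.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Pod whose only job is to echo a string; the test then reads the
	// container log and asserts that the string appears there.
	pod := &corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "busybox-scheduling"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "busybox",
				Image:   "busybox:1.29",
				Command: []string{"sh", "-c", "echo 'Hello from busybox'"},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out)) // e.g. `kubectl create -f -`, then `kubectl logs busybox-scheduling`
}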
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":309,"completed":175,"skipped":3028,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 2 23:39:43.413: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:129 [It] should rollback without unnecessary restarts [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 Feb 2 23:39:43.532: INFO: Create a RollingUpdate DaemonSet Feb 2 23:39:43.537: INFO: Check that daemon pods launch on every node of the cluster Feb 2 23:39:43.552: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 2 23:39:43.562: INFO: Number of nodes with available pods: 0 Feb 2 23:39:43.562: INFO: Node leguer-worker is running more than one daemon pod Feb 2 23:39:44.590: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 2 23:39:44.629: INFO: Number of nodes with available pods: 0 Feb 2 23:39:44.629: INFO: Node leguer-worker is running more than one daemon pod Feb 2 23:39:45.790: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 2 23:39:45.793: INFO: Number of nodes with available pods: 0 Feb 2 23:39:45.793: INFO: Node leguer-worker is running more than one daemon pod Feb 2 23:39:46.567: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 2 23:39:46.570: INFO: Number of nodes with available pods: 0 Feb 2 23:39:46.570: INFO: Node leguer-worker is running more than one daemon pod Feb 2 23:39:47.568: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 2 23:39:47.571: INFO: Number of nodes with available pods: 0 Feb 2 23:39:47.572: INFO: Node leguer-worker is running more than one daemon pod Feb 2 23:39:48.571: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 2 23:39:48.582: INFO: Number of nodes with available pods: 2 Feb 2 23:39:48.582: INFO: Number of running nodes: 2, number of available pods: 2 Feb 2 23:39:48.582: INFO: Update the DaemonSet to trigger a rollout Feb 2 23:39:48.591: INFO: Updating DaemonSet daemon-set Feb 2 23:40:10.636: INFO: Roll back the DaemonSet before rollout is complete Feb 2 
23:40:10.644: INFO: Updating DaemonSet daemon-set Feb 2 23:40:10.644: INFO: Make sure DaemonSet rollback is complete Feb 2 23:40:10.673: INFO: Wrong image for pod: daemon-set-w2bm9. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Feb 2 23:40:10.673: INFO: Pod daemon-set-w2bm9 is not available Feb 2 23:40:10.708: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 2 23:40:11.714: INFO: Wrong image for pod: daemon-set-w2bm9. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Feb 2 23:40:11.714: INFO: Pod daemon-set-w2bm9 is not available Feb 2 23:40:11.718: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 2 23:40:12.713: INFO: Wrong image for pod: daemon-set-w2bm9. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Feb 2 23:40:12.714: INFO: Pod daemon-set-w2bm9 is not available Feb 2 23:40:12.718: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 2 23:40:13.714: INFO: Wrong image for pod: daemon-set-w2bm9. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Feb 2 23:40:13.714: INFO: Pod daemon-set-w2bm9 is not available Feb 2 23:40:13.717: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 2 23:40:14.713: INFO: Wrong image for pod: daemon-set-w2bm9. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Feb 2 23:40:14.713: INFO: Pod daemon-set-w2bm9 is not available Feb 2 23:40:14.718: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 2 23:40:15.714: INFO: Wrong image for pod: daemon-set-w2bm9. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Feb 2 23:40:15.715: INFO: Pod daemon-set-w2bm9 is not available Feb 2 23:40:15.719: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 2 23:40:16.713: INFO: Wrong image for pod: daemon-set-w2bm9. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Feb 2 23:40:16.713: INFO: Pod daemon-set-w2bm9 is not available Feb 2 23:40:16.717: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 2 23:40:17.713: INFO: Wrong image for pod: daemon-set-w2bm9. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Feb 2 23:40:17.713: INFO: Pod daemon-set-w2bm9 is not available Feb 2 23:40:17.717: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 2 23:40:18.713: INFO: Wrong image for pod: daemon-set-w2bm9. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. 
Feb 2 23:40:18.713: INFO: Pod daemon-set-w2bm9 is not available Feb 2 23:40:18.718: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 2 23:40:19.714: INFO: Wrong image for pod: daemon-set-w2bm9. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Feb 2 23:40:19.714: INFO: Pod daemon-set-w2bm9 is not available Feb 2 23:40:19.719: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 2 23:40:20.714: INFO: Wrong image for pod: daemon-set-w2bm9. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Feb 2 23:40:20.714: INFO: Pod daemon-set-w2bm9 is not available Feb 2 23:40:20.719: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 2 23:40:21.714: INFO: Wrong image for pod: daemon-set-w2bm9. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Feb 2 23:40:21.714: INFO: Pod daemon-set-w2bm9 is not available Feb 2 23:40:21.718: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 2 23:40:22.712: INFO: Wrong image for pod: daemon-set-w2bm9. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Feb 2 23:40:22.712: INFO: Pod daemon-set-w2bm9 is not available Feb 2 23:40:22.716: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 2 23:40:23.717: INFO: Wrong image for pod: daemon-set-w2bm9. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Feb 2 23:40:23.717: INFO: Pod daemon-set-w2bm9 is not available Feb 2 23:40:23.723: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 2 23:40:24.714: INFO: Wrong image for pod: daemon-set-w2bm9. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Feb 2 23:40:24.714: INFO: Pod daemon-set-w2bm9 is not available Feb 2 23:40:24.718: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 2 23:40:25.714: INFO: Wrong image for pod: daemon-set-w2bm9. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Feb 2 23:40:25.714: INFO: Pod daemon-set-w2bm9 is not available Feb 2 23:40:25.718: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 2 23:40:26.712: INFO: Wrong image for pod: daemon-set-w2bm9. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Feb 2 23:40:26.712: INFO: Pod daemon-set-w2bm9 is not available Feb 2 23:40:26.717: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 2 23:40:27.714: INFO: Wrong image for pod: daemon-set-w2bm9. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. 
Feb 2 23:40:27.714: INFO: Pod daemon-set-w2bm9 is not available Feb 2 23:40:27.718: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 2 23:40:28.714: INFO: Wrong image for pod: daemon-set-w2bm9. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Feb 2 23:40:28.714: INFO: Pod daemon-set-w2bm9 is not available Feb 2 23:40:28.719: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 2 23:40:29.714: INFO: Wrong image for pod: daemon-set-w2bm9. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Feb 2 23:40:29.714: INFO: Pod daemon-set-w2bm9 is not available Feb 2 23:40:29.718: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 2 23:40:30.712: INFO: Wrong image for pod: daemon-set-w2bm9. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Feb 2 23:40:30.712: INFO: Pod daemon-set-w2bm9 is not available Feb 2 23:40:30.716: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 2 23:40:31.713: INFO: Wrong image for pod: daemon-set-w2bm9. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Feb 2 23:40:31.713: INFO: Pod daemon-set-w2bm9 is not available Feb 2 23:40:31.718: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 2 23:40:32.714: INFO: Wrong image for pod: daemon-set-w2bm9. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Feb 2 23:40:32.714: INFO: Pod daemon-set-w2bm9 is not available Feb 2 23:40:32.719: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 2 23:40:33.713: INFO: Wrong image for pod: daemon-set-w2bm9. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Feb 2 23:40:33.713: INFO: Pod daemon-set-w2bm9 is not available Feb 2 23:40:33.718: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 2 23:40:34.714: INFO: Wrong image for pod: daemon-set-w2bm9. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Feb 2 23:40:34.714: INFO: Pod daemon-set-w2bm9 is not available Feb 2 23:40:34.719: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 2 23:40:35.714: INFO: Wrong image for pod: daemon-set-w2bm9. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Feb 2 23:40:35.714: INFO: Pod daemon-set-w2bm9 is not available Feb 2 23:40:35.719: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 2 23:40:36.713: INFO: Wrong image for pod: daemon-set-w2bm9. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. 
Feb 2 23:40:36.713: INFO: Pod daemon-set-w2bm9 is not available Feb 2 23:40:36.718: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 2 23:40:37.713: INFO: Wrong image for pod: daemon-set-w2bm9. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Feb 2 23:40:37.713: INFO: Pod daemon-set-w2bm9 is not available Feb 2 23:40:37.718: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 2 23:40:38.713: INFO: Wrong image for pod: daemon-set-w2bm9. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Feb 2 23:40:38.713: INFO: Pod daemon-set-w2bm9 is not available Feb 2 23:40:38.718: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 2 23:40:39.713: INFO: Wrong image for pod: daemon-set-w2bm9. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Feb 2 23:40:39.713: INFO: Pod daemon-set-w2bm9 is not available Feb 2 23:40:39.717: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 2 23:40:40.713: INFO: Wrong image for pod: daemon-set-w2bm9. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Feb 2 23:40:40.713: INFO: Pod daemon-set-w2bm9 is not available Feb 2 23:40:40.717: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 2 23:40:41.713: INFO: Wrong image for pod: daemon-set-w2bm9. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Feb 2 23:40:41.713: INFO: Pod daemon-set-w2bm9 is not available Feb 2 23:40:41.718: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 2 23:40:42.714: INFO: Wrong image for pod: daemon-set-w2bm9. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Feb 2 23:40:42.714: INFO: Pod daemon-set-w2bm9 is not available Feb 2 23:40:42.717: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 2 23:40:43.714: INFO: Wrong image for pod: daemon-set-w2bm9. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Feb 2 23:40:43.714: INFO: Pod daemon-set-w2bm9 is not available Feb 2 23:40:43.719: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 2 23:40:44.714: INFO: Wrong image for pod: daemon-set-w2bm9. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Feb 2 23:40:44.715: INFO: Pod daemon-set-w2bm9 is not available Feb 2 23:40:44.720: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 2 23:40:45.713: INFO: Wrong image for pod: daemon-set-w2bm9. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. 
Feb 2 23:40:45.714: INFO: Pod daemon-set-w2bm9 is not available Feb 2 23:40:45.718: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 2 23:40:46.714: INFO: Wrong image for pod: daemon-set-w2bm9. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Feb 2 23:40:46.714: INFO: Pod daemon-set-w2bm9 is not available Feb 2 23:40:46.719: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 2 23:40:47.713: INFO: Wrong image for pod: daemon-set-w2bm9. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Feb 2 23:40:47.713: INFO: Pod daemon-set-w2bm9 is not available Feb 2 23:40:47.717: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 2 23:40:48.714: INFO: Wrong image for pod: daemon-set-w2bm9. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Feb 2 23:40:48.714: INFO: Pod daemon-set-w2bm9 is not available Feb 2 23:40:48.718: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 2 23:40:49.713: INFO: Wrong image for pod: daemon-set-w2bm9. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Feb 2 23:40:49.713: INFO: Pod daemon-set-w2bm9 is not available Feb 2 23:40:49.718: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 2 23:40:50.714: INFO: Wrong image for pod: daemon-set-w2bm9. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Feb 2 23:40:50.714: INFO: Pod daemon-set-w2bm9 is not available Feb 2 23:40:50.719: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 2 23:40:51.714: INFO: Wrong image for pod: daemon-set-w2bm9. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Feb 2 23:40:51.714: INFO: Pod daemon-set-w2bm9 is not available Feb 2 23:40:51.718: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 2 23:40:52.714: INFO: Wrong image for pod: daemon-set-w2bm9. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Feb 2 23:40:52.714: INFO: Pod daemon-set-w2bm9 is not available Feb 2 23:40:52.718: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 2 23:40:53.714: INFO: Wrong image for pod: daemon-set-w2bm9. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Feb 2 23:40:53.714: INFO: Pod daemon-set-w2bm9 is not available Feb 2 23:40:53.719: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 2 23:40:54.714: INFO: Wrong image for pod: daemon-set-w2bm9. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. 
Feb 2 23:40:54.714: INFO: Pod daemon-set-w2bm9 is not available Feb 2 23:40:54.719: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 2 23:40:55.714: INFO: Wrong image for pod: daemon-set-w2bm9. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Feb 2 23:40:55.714: INFO: Pod daemon-set-w2bm9 is not available Feb 2 23:40:55.718: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 2 23:40:56.713: INFO: Wrong image for pod: daemon-set-w2bm9. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Feb 2 23:40:56.713: INFO: Pod daemon-set-w2bm9 is not available Feb 2 23:40:56.717: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 2 23:40:57.713: INFO: Wrong image for pod: daemon-set-w2bm9. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Feb 2 23:40:57.713: INFO: Pod daemon-set-w2bm9 is not available Feb 2 23:40:57.718: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 2 23:40:58.712: INFO: Wrong image for pod: daemon-set-w2bm9. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Feb 2 23:40:58.712: INFO: Pod daemon-set-w2bm9 is not available Feb 2 23:40:58.715: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 2 23:40:59.713: INFO: Wrong image for pod: daemon-set-w2bm9. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Feb 2 23:40:59.713: INFO: Pod daemon-set-w2bm9 is not available Feb 2 23:40:59.718: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 2 23:41:00.721: INFO: Wrong image for pod: daemon-set-w2bm9. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Feb 2 23:41:00.721: INFO: Pod daemon-set-w2bm9 is not available Feb 2 23:41:00.724: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 2 23:41:01.714: INFO: Wrong image for pod: daemon-set-w2bm9. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Feb 2 23:41:01.714: INFO: Pod daemon-set-w2bm9 is not available Feb 2 23:41:01.719: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 2 23:41:02.714: INFO: Wrong image for pod: daemon-set-w2bm9. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Feb 2 23:41:02.714: INFO: Pod daemon-set-w2bm9 is not available Feb 2 23:41:02.719: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 2 23:41:03.712: INFO: Wrong image for pod: daemon-set-w2bm9. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. 
Feb 2 23:41:03.712: INFO: Pod daemon-set-w2bm9 is not available Feb 2 23:41:03.716: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 2 23:41:04.714: INFO: Wrong image for pod: daemon-set-w2bm9. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Feb 2 23:41:04.714: INFO: Pod daemon-set-w2bm9 is not available Feb 2 23:41:04.719: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 2 23:41:05.713: INFO: Wrong image for pod: daemon-set-w2bm9. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Feb 2 23:41:05.713: INFO: Pod daemon-set-w2bm9 is not available Feb 2 23:41:05.717: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 2 23:41:06.713: INFO: Wrong image for pod: daemon-set-w2bm9. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Feb 2 23:41:06.713: INFO: Pod daemon-set-w2bm9 is not available Feb 2 23:41:06.734: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 2 23:41:07.713: INFO: Wrong image for pod: daemon-set-w2bm9. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Feb 2 23:41:07.713: INFO: Pod daemon-set-w2bm9 is not available Feb 2 23:41:07.717: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 2 23:41:08.714: INFO: Wrong image for pod: daemon-set-w2bm9. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Feb 2 23:41:08.714: INFO: Pod daemon-set-w2bm9 is not available Feb 2 23:41:08.718: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 2 23:41:09.712: INFO: Wrong image for pod: daemon-set-w2bm9. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. 
Feb 2 23:41:09.712: INFO: Pod daemon-set-w2bm9 is not available Feb 2 23:41:09.782: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 2 23:41:10.714: INFO: Pod daemon-set-5z5kr is not available Feb 2 23:41:10.718: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node [AfterEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:95 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-2359, will wait for the garbage collector to delete the pods Feb 2 23:41:10.785: INFO: Deleting DaemonSet.extensions daemon-set took: 6.950276ms Feb 2 23:41:11.785: INFO: Terminating DaemonSet.extensions daemon-set pods took: 1.00021033s Feb 2 23:42:20.195: INFO: Number of nodes with available pods: 0 Feb 2 23:42:20.195: INFO: Number of running nodes: 0, number of available pods: 0 Feb 2 23:42:20.198: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"4185353"},"items":null} Feb 2 23:42:20.201: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"4185353"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 2 23:42:20.211: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-2359" for this suite. • [SLOW TEST:156.805 seconds] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should rollback without unnecessary restarts [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","total":309,"completed":176,"skipped":3041,"failed":0} SSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 2 23:42:20.218: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a configMap. 
[Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ConfigMap STEP: Ensuring resource quota status captures configMap creation STEP: Deleting a ConfigMap STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 2 23:42:48.363: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-2566" for this suite. • [SLOW TEST:28.155 seconds] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a configMap. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]","total":309,"completed":177,"skipped":3044,"failed":0} SSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 2 23:42:48.374: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a pod to test emptydir 0777 on node default medium Feb 2 23:42:48.503: INFO: Waiting up to 5m0s for pod "pod-4b99eb00-c891-4b82-b889-b976204eaa4a" in namespace "emptydir-7540" to be "Succeeded or Failed" Feb 2 23:42:48.512: INFO: Pod "pod-4b99eb00-c891-4b82-b889-b976204eaa4a": Phase="Pending", Reason="", readiness=false. Elapsed: 8.586421ms Feb 2 23:42:50.517: INFO: Pod "pod-4b99eb00-c891-4b82-b889-b976204eaa4a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013505792s Feb 2 23:42:52.527: INFO: Pod "pod-4b99eb00-c891-4b82-b889-b976204eaa4a": Phase="Running", Reason="", readiness=true. Elapsed: 4.023477896s Feb 2 23:42:54.532: INFO: Pod "pod-4b99eb00-c891-4b82-b889-b976204eaa4a": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.028222599s STEP: Saw pod success Feb 2 23:42:54.532: INFO: Pod "pod-4b99eb00-c891-4b82-b889-b976204eaa4a" satisfied condition "Succeeded or Failed" Feb 2 23:42:54.535: INFO: Trying to get logs from node leguer-worker pod pod-4b99eb00-c891-4b82-b889-b976204eaa4a container test-container: STEP: delete the pod Feb 2 23:42:54.568: INFO: Waiting for pod pod-4b99eb00-c891-4b82-b889-b976204eaa4a to disappear Feb 2 23:42:54.585: INFO: Pod pod-4b99eb00-c891-4b82-b889-b976204eaa4a no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 2 23:42:54.586: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7540" for this suite. • [SLOW TEST:6.219 seconds] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:45 should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":309,"completed":178,"skipped":3048,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [k8s.io] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 2 23:42:54.593: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's args [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a pod to test substitution in container's args Feb 2 23:42:54.709: INFO: Waiting up to 5m0s for pod "var-expansion-d0204c95-56ad-48b6-91da-315a9b7cf5c5" in namespace "var-expansion-8700" to be "Succeeded or Failed" Feb 2 23:42:54.729: INFO: Pod "var-expansion-d0204c95-56ad-48b6-91da-315a9b7cf5c5": Phase="Pending", Reason="", readiness=false. Elapsed: 19.501917ms Feb 2 23:42:56.733: INFO: Pod "var-expansion-d0204c95-56ad-48b6-91da-315a9b7cf5c5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024293927s Feb 2 23:42:58.738: INFO: Pod "var-expansion-d0204c95-56ad-48b6-91da-315a9b7cf5c5": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.028720481s STEP: Saw pod success Feb 2 23:42:58.738: INFO: Pod "var-expansion-d0204c95-56ad-48b6-91da-315a9b7cf5c5" satisfied condition "Succeeded or Failed" Feb 2 23:42:58.744: INFO: Trying to get logs from node leguer-worker pod var-expansion-d0204c95-56ad-48b6-91da-315a9b7cf5c5 container dapi-container: STEP: delete the pod Feb 2 23:42:58.789: INFO: Waiting for pod var-expansion-d0204c95-56ad-48b6-91da-315a9b7cf5c5 to disappear Feb 2 23:42:58.800: INFO: Pod var-expansion-d0204c95-56ad-48b6-91da-315a9b7cf5c5 no longer exists [AfterEach] [k8s.io] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 2 23:42:58.800: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-8700" for this suite. •{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":309,"completed":179,"skipped":3093,"failed":0} ------------------------------ [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 2 23:42:58.825: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: create the rc1 STEP: create the rc2 STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well STEP: delete the rc simpletest-rc-to-be-deleted STEP: wait for the rc to be deleted STEP: Gathering metrics W0202 23:43:11.384092 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Feb 2 23:44:13.402: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering. Feb 2 23:44:13.402: INFO: Deleting pod "simpletest-rc-to-be-deleted-2rznm" in namespace "gc-1158" Feb 2 23:44:13.414: INFO: Deleting pod "simpletest-rc-to-be-deleted-5mb5k" in namespace "gc-1158" Feb 2 23:44:13.512: INFO: Deleting pod "simpletest-rc-to-be-deleted-5z8s5" in namespace "gc-1158" Feb 2 23:44:13.561: INFO: Deleting pod "simpletest-rc-to-be-deleted-7gz8x" in namespace "gc-1158" Feb 2 23:44:13.587: INFO: Deleting pod "simpletest-rc-to-be-deleted-cpxf5" in namespace "gc-1158" [AfterEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 2 23:44:13.983: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-1158" for this suite. 
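The garbage-collector behaviour exercised above turns on ownerReferences: a dependent is collected only once every listed owner is gone, so pods that were given simpletest-rc-to-stay as a second owner survive the deletion of simpletest-rc-to-be-deleted. The client-go sketch below illustrates the same mechanics outside the suite; the namespace, object names and kubeconfig path are placeholders, not values taken from this run.

```go
package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path mirrors the one the suite logs; adjust as needed.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.TODO()
	ns := "gc-demo" // hypothetical namespace, not the suite's generated one

	// Give an existing pod a second owner so it outlives the first one.
	pod, err := client.CoreV1().Pods(ns).Get(ctx, "simpletest-pod", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	rcToStay, err := client.CoreV1().ReplicationControllers(ns).Get(ctx, "rc-to-stay", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	pod.OwnerReferences = append(pod.OwnerReferences, metav1.OwnerReference{
		APIVersion: "v1",
		Kind:       "ReplicationController",
		Name:       rcToStay.Name,
		UID:        rcToStay.UID,
	})
	if _, err := client.CoreV1().Pods(ns).Update(ctx, pod, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}

	// Delete the other owner; foreground propagation makes the deletion wait
	// for dependents, but pods still owned by rc-to-stay are not collected.
	foreground := metav1.DeletePropagationForeground
	if err := client.CoreV1().ReplicationControllers(ns).Delete(ctx, "rc-to-be-deleted",
		metav1.DeleteOptions{PropagationPolicy: &foreground}); err != nil {
		panic(err)
	}
}
```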
• [SLOW TEST:75.379 seconds] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":309,"completed":180,"skipped":3093,"failed":0} SS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 2 23:44:14.204: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] creating/deleting custom resource definition objects works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 Feb 2 23:44:14.771: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 2 23:44:15.809: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-4105" for this suite. 
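Creating and deleting a CustomResourceDefinition, as the test above does, goes through the apiextensions clientset rather than the core one. The following is a rough, illustrative sketch only; the group, kind and schema are invented for the example and are not the objects the suite registers.

```go
package main

import (
	"context"

	apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
	apiextensionsclient "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := apiextensionsclient.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.TODO()

	// A deliberately minimal CRD: one served+storage version, empty object schema.
	crd := &apiextensionsv1.CustomResourceDefinition{
		ObjectMeta: metav1.ObjectMeta{Name: "foos.example.com"}, // hypothetical group/kind
		Spec: apiextensionsv1.CustomResourceDefinitionSpec{
			Group: "example.com",
			Scope: apiextensionsv1.NamespaceScoped,
			Names: apiextensionsv1.CustomResourceDefinitionNames{
				Plural:   "foos",
				Singular: "foo",
				Kind:     "Foo",
				ListKind: "FooList",
			},
			Versions: []apiextensionsv1.CustomResourceDefinitionVersion{{
				Name:    "v1",
				Served:  true,
				Storage: true,
				Schema: &apiextensionsv1.CustomResourceValidation{
					OpenAPIV3Schema: &apiextensionsv1.JSONSchemaProps{Type: "object"},
				},
			}},
		},
	}

	// Create the definition, then delete it; the API server tears down the
	// served endpoints for the custom resource as part of the deletion.
	if _, err := client.ApiextensionsV1().CustomResourceDefinitions().Create(ctx, crd, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
	if err := client.ApiextensionsV1().CustomResourceDefinitions().Delete(ctx, crd.Name, metav1.DeleteOptions{}); err != nil {
		panic(err)
	}
}
```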
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]","total":309,"completed":181,"skipped":3095,"failed":0} SSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 2 23:44:15.818: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41 [It] should provide container's cpu request [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a pod to test downward API volume plugin Feb 2 23:44:16.124: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0ee2e4e2-825d-47d0-b018-f3ba06f07a75" in namespace "projected-9274" to be "Succeeded or Failed" Feb 2 23:44:16.126: INFO: Pod "downwardapi-volume-0ee2e4e2-825d-47d0-b018-f3ba06f07a75": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019215ms Feb 2 23:44:18.131: INFO: Pod "downwardapi-volume-0ee2e4e2-825d-47d0-b018-f3ba06f07a75": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006197032s Feb 2 23:44:20.221: INFO: Pod "downwardapi-volume-0ee2e4e2-825d-47d0-b018-f3ba06f07a75": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.096589678s STEP: Saw pod success Feb 2 23:44:20.221: INFO: Pod "downwardapi-volume-0ee2e4e2-825d-47d0-b018-f3ba06f07a75" satisfied condition "Succeeded or Failed" Feb 2 23:44:20.225: INFO: Trying to get logs from node leguer-worker pod downwardapi-volume-0ee2e4e2-825d-47d0-b018-f3ba06f07a75 container client-container: STEP: delete the pod Feb 2 23:44:20.359: INFO: Waiting for pod downwardapi-volume-0ee2e4e2-825d-47d0-b018-f3ba06f07a75 to disappear Feb 2 23:44:20.372: INFO: Pod downwardapi-volume-0ee2e4e2-825d-47d0-b018-f3ba06f07a75 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 2 23:44:20.372: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9274" for this suite. 
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":309,"completed":182,"skipped":3098,"failed":0} ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 2 23:44:20.421: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all pods are removed when a namespace is deleted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a pod in the namespace STEP: Waiting for the pod to have running status STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there are no pods in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 2 23:45:20.874: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-7030" for this suite. STEP: Destroying namespace "nsdeletetest-9718" for this suite. Feb 2 23:45:20.898: INFO: Namespace nsdeletetest-9718 was already deleted STEP: Destroying namespace "nsdeletetest-998" for this suite. 
• [SLOW TEST:60.481 seconds] [sig-api-machinery] Namespaces [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all pods are removed when a namespace is deleted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","total":309,"completed":183,"skipped":3098,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 2 23:45:20.903: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-watch STEP: Waiting for a default service account to be provisioned in namespace [It] watch on custom resource definition objects [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 Feb 2 23:45:20.981: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating first CR Feb 2 23:45:21.592: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2021-02-02T23:45:21Z generation:1 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2021-02-02T23:45:21Z]] name:name1 resourceVersion:4186108 uid:e4cc6e3d-0136-4f12-91e3-5f89885286a8] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Creating second CR Feb 2 23:45:31.601: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2021-02-02T23:45:31Z generation:1 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2021-02-02T23:45:31Z]] name:name2 resourceVersion:4186139 uid:d9a2e115-d1ae-42f3-a708-c21db624eed6] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying first CR Feb 2 23:45:41.612: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2021-02-02T23:45:21Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2021-02-02T23:45:41Z]] name:name1 resourceVersion:4186159 uid:e4cc6e3d-0136-4f12-91e3-5f89885286a8] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying second CR Feb 2 23:45:51.620: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu 
metadata:map[creationTimestamp:2021-02-02T23:45:31Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2021-02-02T23:45:51Z]] name:name2 resourceVersion:4186179 uid:d9a2e115-d1ae-42f3-a708-c21db624eed6] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting first CR Feb 2 23:46:01.633: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2021-02-02T23:45:21Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2021-02-02T23:45:41Z]] name:name1 resourceVersion:4186199 uid:e4cc6e3d-0136-4f12-91e3-5f89885286a8] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting second CR Feb 2 23:46:11.643: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2021-02-02T23:45:31Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2021-02-02T23:45:51Z]] name:name2 resourceVersion:4186219 uid:d9a2e115-d1ae-42f3-a708-c21db624eed6] num:map[num1:9223372036854775807 num2:1000000]]} [AfterEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 2 23:46:22.155: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-watch-448" for this suite. 
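Watching custom resources, as the ADDED / MODIFIED / DELETED events above show, is typically done with the dynamic client, since the custom types are not compiled into client-go. The sketch below shows the general shape; the GroupVersionResource and namespace are assumptions standing in for whatever CRD the cluster actually serves.

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	dyn, err := dynamic.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// GroupVersionResource for a hypothetical CRD served by the cluster.
	gvr := schema.GroupVersionResource{
		Group:    "mygroup.example.com",
		Version:  "v1beta1",
		Resource: "noxus",
	}

	w, err := dyn.Resource(gvr).Namespace("default").Watch(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	defer w.Stop()

	// Every create, update and delete of a custom object arrives as an event,
	// mirroring the ADDED / MODIFIED / DELETED lines in the log above.
	for ev := range w.ResultChan() {
		fmt.Printf("%s %v\n", ev.Type, ev.Object)
	}
}
```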
• [SLOW TEST:61.259 seconds] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 CustomResourceDefinition Watch /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_watch.go:42 watch on custom resource definition objects [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":309,"completed":184,"skipped":3111,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 2 23:46:22.163: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Feb 2 23:46:22.728: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Feb 2 23:46:24.739: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63747906382, loc:(*time.Location)(0x7962e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63747906382, loc:(*time.Location)(0x7962e20)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63747906382, loc:(*time.Location)(0x7962e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63747906382, loc:(*time.Location)(0x7962e20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6bd9446d55\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 2 23:46:26.742: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63747906382, loc:(*time.Location)(0x7962e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63747906382, loc:(*time.Location)(0x7962e20)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", 
Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63747906382, loc:(*time.Location)(0x7962e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63747906382, loc:(*time.Location)(0x7962e20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6bd9446d55\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Feb 2 23:46:29.770: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing mutating webhooks should work [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that should be mutated STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that should not be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 2 23:46:30.321: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-6388" for this suite. STEP: Destroying namespace "webhook-6388-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 • [SLOW TEST:8.308 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing mutating webhooks should work [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":309,"completed":185,"skipped":3117,"failed":0} S ------------------------------ [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 2 23:46:30.471: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan pods created by rc if delete options say so [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods STEP: Gathering metrics W0202 23:47:11.002759 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Feb 2 23:48:13.023: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering. 
Feb 2 23:48:13.023: INFO: Deleting pod "simpletest.rc-4vbmz" in namespace "gc-7723" Feb 2 23:48:13.050: INFO: Deleting pod "simpletest.rc-6xrl6" in namespace "gc-7723" Feb 2 23:48:13.086: INFO: Deleting pod "simpletest.rc-84p46" in namespace "gc-7723" Feb 2 23:48:13.170: INFO: Deleting pod "simpletest.rc-d656w" in namespace "gc-7723" Feb 2 23:48:13.529: INFO: Deleting pod "simpletest.rc-dd5w5" in namespace "gc-7723" Feb 2 23:48:13.744: INFO: Deleting pod "simpletest.rc-kthrk" in namespace "gc-7723" Feb 2 23:48:13.793: INFO: Deleting pod "simpletest.rc-pft4r" in namespace "gc-7723" Feb 2 23:48:14.061: INFO: Deleting pod "simpletest.rc-qmvh6" in namespace "gc-7723" Feb 2 23:48:14.248: INFO: Deleting pod "simpletest.rc-qnk22" in namespace "gc-7723" Feb 2 23:48:14.574: INFO: Deleting pod "simpletest.rc-xt4dl" in namespace "gc-7723" [AfterEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 2 23:48:14.880: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-7723" for this suite. • [SLOW TEST:104.785 seconds] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan pods created by rc if delete options say so [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":309,"completed":186,"skipped":3118,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for the cluster [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 2 23:48:15.257: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for the cluster [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-8652.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8652.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Feb 2 23:48:26.137: INFO: DNS probes using dns-8652/dns-test-03a64c8c-72bb-40e0-ac6f-22e14cc45e68 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 2 23:48:26.217: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-8652" for this suite. • [SLOW TEST:11.012 seconds] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for the cluster [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for the cluster [Conformance]","total":309,"completed":187,"skipped":3130,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 2 23:48:26.269: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with projected pod [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating pod pod-subpath-test-projected-cntg STEP: Creating a pod to test atomic-volume-subpath Feb 2 23:48:26.756: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-cntg" in namespace "subpath-7786" to be "Succeeded or Failed" Feb 2 23:48:26.827: INFO: Pod "pod-subpath-test-projected-cntg": Phase="Pending", Reason="", readiness=false. 
Elapsed: 70.265462ms Feb 2 23:48:28.830: INFO: Pod "pod-subpath-test-projected-cntg": Phase="Pending", Reason="", readiness=false. Elapsed: 2.073665042s Feb 2 23:48:30.835: INFO: Pod "pod-subpath-test-projected-cntg": Phase="Running", Reason="", readiness=true. Elapsed: 4.078310805s Feb 2 23:48:32.840: INFO: Pod "pod-subpath-test-projected-cntg": Phase="Running", Reason="", readiness=true. Elapsed: 6.083731087s Feb 2 23:48:34.845: INFO: Pod "pod-subpath-test-projected-cntg": Phase="Running", Reason="", readiness=true. Elapsed: 8.088353077s Feb 2 23:48:36.849: INFO: Pod "pod-subpath-test-projected-cntg": Phase="Running", Reason="", readiness=true. Elapsed: 10.092507524s Feb 2 23:48:38.853: INFO: Pod "pod-subpath-test-projected-cntg": Phase="Running", Reason="", readiness=true. Elapsed: 12.096600496s Feb 2 23:48:40.857: INFO: Pod "pod-subpath-test-projected-cntg": Phase="Running", Reason="", readiness=true. Elapsed: 14.100211474s Feb 2 23:48:42.861: INFO: Pod "pod-subpath-test-projected-cntg": Phase="Running", Reason="", readiness=true. Elapsed: 16.104696706s Feb 2 23:48:44.866: INFO: Pod "pod-subpath-test-projected-cntg": Phase="Running", Reason="", readiness=true. Elapsed: 18.109263472s Feb 2 23:48:46.871: INFO: Pod "pod-subpath-test-projected-cntg": Phase="Running", Reason="", readiness=true. Elapsed: 20.114446674s Feb 2 23:48:48.875: INFO: Pod "pod-subpath-test-projected-cntg": Phase="Running", Reason="", readiness=true. Elapsed: 22.118555488s Feb 2 23:48:50.880: INFO: Pod "pod-subpath-test-projected-cntg": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.123194648s STEP: Saw pod success Feb 2 23:48:50.880: INFO: Pod "pod-subpath-test-projected-cntg" satisfied condition "Succeeded or Failed" Feb 2 23:48:50.882: INFO: Trying to get logs from node leguer-worker pod pod-subpath-test-projected-cntg container test-container-subpath-projected-cntg: STEP: delete the pod Feb 2 23:48:50.910: INFO: Waiting for pod pod-subpath-test-projected-cntg to disappear Feb 2 23:48:50.915: INFO: Pod pod-subpath-test-projected-cntg no longer exists STEP: Deleting pod pod-subpath-test-projected-cntg Feb 2 23:48:50.915: INFO: Deleting pod "pod-subpath-test-projected-cntg" in namespace "subpath-7786" [AfterEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 2 23:48:50.918: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-7786" for this suite. 
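The subpath test above mounts a single entry of a projected volume into the container via volumeMounts[].subPath instead of exposing the whole directory. A minimal, assumed example of such a pod spec in client-go follows; the ConfigMap name, key and image are placeholders rather than the suite's generated objects.

```go
package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// A projected ConfigMap volume; the container mounts one key of it via
	// subPath rather than the whole projected directory.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-subpath-test-projected"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "projected-vol",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							ConfigMap: &corev1.ConfigMapProjection{
								LocalObjectReference: corev1.LocalObjectReference{Name: "my-config"}, // assumed to exist
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "test-container-subpath",
				Image:   "docker.io/library/busybox:1.29",
				Command: []string{"sh", "-c", "cat /test/data"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "projected-vol",
					MountPath: "/test/data",
					SubPath:   "data", // a key inside the projected ConfigMap
				}},
			}},
		},
	}
	if _, err := client.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
```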
• [SLOW TEST:24.816 seconds] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with projected pod [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":309,"completed":188,"skipped":3144,"failed":0} SSSSS ------------------------------ [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 2 23:48:51.085: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready Feb 2 23:48:51.790: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set Feb 2 23:48:53.814: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63747906531, loc:(*time.Location)(0x7962e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63747906531, loc:(*time.Location)(0x7962e20)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63747906531, loc:(*time.Location)(0x7962e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63747906531, loc:(*time.Location)(0x7962e20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-7d6697c5b7\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Feb 2 23:48:56.837: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert from CR v1 to CR v2 [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 Feb 2 23:48:56.842: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: v2 custom resource should be converted [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook 
[Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 2 23:48:58.024: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-3474" for this suite. [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137 • [SLOW TEST:7.097 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert from CR v1 to CR v2 [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":309,"completed":189,"skipped":3149,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 2 23:48:58.183: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating projection with secret that has name projected-secret-test-603563d5-2ebb-4384-8881-1b5c9bc619f5 STEP: Creating a pod to test consume secrets Feb 2 23:48:58.314: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-c4ce7fbb-28cb-4b34-9120-14559538259a" in namespace "projected-7599" to be "Succeeded or Failed" Feb 2 23:48:58.323: INFO: Pod "pod-projected-secrets-c4ce7fbb-28cb-4b34-9120-14559538259a": Phase="Pending", Reason="", readiness=false. Elapsed: 8.473506ms Feb 2 23:49:00.327: INFO: Pod "pod-projected-secrets-c4ce7fbb-28cb-4b34-9120-14559538259a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012899171s Feb 2 23:49:02.331: INFO: Pod "pod-projected-secrets-c4ce7fbb-28cb-4b34-9120-14559538259a": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.017092215s STEP: Saw pod success Feb 2 23:49:02.331: INFO: Pod "pod-projected-secrets-c4ce7fbb-28cb-4b34-9120-14559538259a" satisfied condition "Succeeded or Failed" Feb 2 23:49:02.333: INFO: Trying to get logs from node leguer-worker2 pod pod-projected-secrets-c4ce7fbb-28cb-4b34-9120-14559538259a container projected-secret-volume-test: STEP: delete the pod Feb 2 23:49:02.375: INFO: Waiting for pod pod-projected-secrets-c4ce7fbb-28cb-4b34-9120-14559538259a to disappear Feb 2 23:49:02.385: INFO: Pod pod-projected-secrets-c4ce7fbb-28cb-4b34-9120-14559538259a no longer exists [AfterEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 2 23:49:02.385: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7599" for this suite. •{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":309,"completed":190,"skipped":3188,"failed":0} SSSSSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 2 23:49:02.395: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via environment variable [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating configMap configmap-4657/configmap-test-4edfa203-57ee-4447-ad9f-7171424eb336 STEP: Creating a pod to test consume configMaps Feb 2 23:49:02.830: INFO: Waiting up to 5m0s for pod "pod-configmaps-568c6540-d5b3-4014-9eed-a233091e6a51" in namespace "configmap-4657" to be "Succeeded or Failed" Feb 2 23:49:02.834: INFO: Pod "pod-configmaps-568c6540-d5b3-4014-9eed-a233091e6a51": Phase="Pending", Reason="", readiness=false. Elapsed: 3.919197ms Feb 2 23:49:04.838: INFO: Pod "pod-configmaps-568c6540-d5b3-4014-9eed-a233091e6a51": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008407868s Feb 2 23:49:06.851: INFO: Pod "pod-configmaps-568c6540-d5b3-4014-9eed-a233091e6a51": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.020741621s STEP: Saw pod success Feb 2 23:49:06.851: INFO: Pod "pod-configmaps-568c6540-d5b3-4014-9eed-a233091e6a51" satisfied condition "Succeeded or Failed" Feb 2 23:49:06.854: INFO: Trying to get logs from node leguer-worker pod pod-configmaps-568c6540-d5b3-4014-9eed-a233091e6a51 container env-test: STEP: delete the pod Feb 2 23:49:06.897: INFO: Waiting for pod pod-configmaps-568c6540-d5b3-4014-9eed-a233091e6a51 to disappear Feb 2 23:49:06.906: INFO: Pod pod-configmaps-568c6540-d5b3-4014-9eed-a233091e6a51 no longer exists [AfterEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 2 23:49:06.906: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4657" for this suite. 
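Consuming a ConfigMap through an environment variable, as verified above, maps one ConfigMap key onto one container env var with configMapKeyRef. A short client-go sketch is below; the namespace, names and image are illustrative.

```go
package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.TODO()
	ns := "default"

	// The ConfigMap whose key will be surfaced as an environment variable.
	cm := &corev1.ConfigMap{
		ObjectMeta: metav1.ObjectMeta{Name: "configmap-test"},
		Data:       map[string]string{"data-1": "value-1"},
	}
	if _, err := client.CoreV1().ConfigMaps(ns).Create(ctx, cm, metav1.CreateOptions{}); err != nil {
		panic(err)
	}

	// A pod that echoes the injected variable and exits.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-configmap-env"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "env-test",
				Image:   "docker.io/library/busybox:1.29",
				Command: []string{"sh", "-c", "echo $CONFIG_DATA_1"},
				Env: []corev1.EnvVar{{
					Name: "CONFIG_DATA_1",
					ValueFrom: &corev1.EnvVarSource{
						ConfigMapKeyRef: &corev1.ConfigMapKeySelector{
							LocalObjectReference: corev1.LocalObjectReference{Name: cm.Name},
							Key:                  "data-1",
						},
					},
				}},
			}},
		},
	}
	if _, err := client.CoreV1().Pods(ns).Create(ctx, pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
```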
•{"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":309,"completed":191,"skipped":3198,"failed":0} SSSSSSS ------------------------------ [sig-apps] ReplicationController should test the lifecycle of a ReplicationController [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 2 23:49:06.913: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54 [It] should test the lifecycle of a ReplicationController [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: creating a ReplicationController STEP: waiting for RC to be added STEP: waiting for available Replicas STEP: patching ReplicationController STEP: waiting for RC to be modified STEP: patching ReplicationController status STEP: waiting for RC to be modified STEP: waiting for available Replicas STEP: fetching ReplicationController status STEP: patching ReplicationController scale STEP: waiting for RC to be modified STEP: waiting for ReplicationController's scale to be the max amount STEP: fetching ReplicationController; ensuring that it's patched STEP: updating ReplicationController status STEP: waiting for RC to be modified STEP: listing all ReplicationControllers STEP: checking that ReplicationController has expected values STEP: deleting ReplicationControllers by collection STEP: waiting for ReplicationController to have a DELETED watchEvent [AfterEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 2 23:49:14.020: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-9253" for this suite. 
• [SLOW TEST:7.115 seconds] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should test the lifecycle of a ReplicationController [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should test the lifecycle of a ReplicationController [Conformance]","total":309,"completed":192,"skipped":3205,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 2 23:49:14.028: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:53 [It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating pod busybox-9e3d4d1e-2a6a-4009-9e36-5cc43637d028 in namespace container-probe-3549 Feb 2 23:49:20.198: INFO: Started pod busybox-9e3d4d1e-2a6a-4009-9e36-5cc43637d028 in namespace container-probe-3549 STEP: checking the pod's current state and verifying that restartCount is present Feb 2 23:49:20.254: INFO: Initial restart count of pod busybox-9e3d4d1e-2a6a-4009-9e36-5cc43637d028 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 2 23:53:21.062: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-3549" for this suite. 
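The probe scenario above (an exec "cat /tmp/health" liveness probe that must not trigger a restart) corresponds to a pod spec along these lines. This is a minimal sketch with an illustrative pod name, assuming the standard busybox image.
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: demo-liveness-ok
spec:
  containers:
  - name: busybox
    image: busybox
    args: ["/bin/sh", "-c", "touch /tmp/health; sleep 600"]   # the probed file exists for the whole run
    livenessProbe:
      exec:
        command: ["cat", "/tmp/health"]
      initialDelaySeconds: 5
      periodSeconds: 5
EOF
# after a few minutes the restart count should still be 0, which is what the test asserts:
kubectl get pod demo-liveness-ok -o jsonpath='{.status.containerStatuses[0].restartCount}'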
• [SLOW TEST:247.075 seconds] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":309,"completed":193,"skipped":3225,"failed":0} SSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [k8s.io] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 2 23:53:21.104: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a pod to test env composition Feb 2 23:53:21.238: INFO: Waiting up to 5m0s for pod "var-expansion-db039c8d-ac54-4a7c-a097-48b3c98af26d" in namespace "var-expansion-5586" to be "Succeeded or Failed" Feb 2 23:53:21.291: INFO: Pod "var-expansion-db039c8d-ac54-4a7c-a097-48b3c98af26d": Phase="Pending", Reason="", readiness=false. Elapsed: 52.798499ms Feb 2 23:53:23.295: INFO: Pod "var-expansion-db039c8d-ac54-4a7c-a097-48b3c98af26d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.057043341s Feb 2 23:53:25.299: INFO: Pod "var-expansion-db039c8d-ac54-4a7c-a097-48b3c98af26d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.061077764s Feb 2 23:53:27.304: INFO: Pod "var-expansion-db039c8d-ac54-4a7c-a097-48b3c98af26d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.066311452s STEP: Saw pod success Feb 2 23:53:27.304: INFO: Pod "var-expansion-db039c8d-ac54-4a7c-a097-48b3c98af26d" satisfied condition "Succeeded or Failed" Feb 2 23:53:27.308: INFO: Trying to get logs from node leguer-worker pod var-expansion-db039c8d-ac54-4a7c-a097-48b3c98af26d container dapi-container: STEP: delete the pod Feb 2 23:53:27.363: INFO: Waiting for pod var-expansion-db039c8d-ac54-4a7c-a097-48b3c98af26d to disappear Feb 2 23:53:27.376: INFO: Pod var-expansion-db039c8d-ac54-4a7c-a097-48b3c98af26d no longer exists [AfterEach] [k8s.io] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 2 23:53:27.376: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-5586" for this suite. 
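The env-composition check above relies on $(VAR) expansion in the pod spec, where one environment variable is built from others already defined. A minimal sketch, with illustrative names and values:
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: demo-var-expansion
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c", "env"]
    env:
    - name: FOO
      value: foo-value
    - name: BAR
      value: bar-value
    - name: FOOBAR
      value: "$(FOO);;$(BAR)"     # expanded by the kubelet from the two variables defined above
EOF
# once the pod has Succeeded, its log should contain FOOBAR=foo-value;;bar-value
kubectl logs demo-var-expansion | grep FOOBAR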
• [SLOW TEST:6.282 seconds] [k8s.io] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 should allow composing env vars into new env vars [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":309,"completed":194,"skipped":3236,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 2 23:53:27.387: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 Feb 2 23:53:27.487: INFO: Waiting up to 5m0s for pod "busybox-user-65534-3beeca70-dd37-4beb-a59a-e110c979f6c3" in namespace "security-context-test-3068" to be "Succeeded or Failed" Feb 2 23:53:27.510: INFO: Pod "busybox-user-65534-3beeca70-dd37-4beb-a59a-e110c979f6c3": Phase="Pending", Reason="", readiness=false. Elapsed: 23.362706ms Feb 2 23:53:29.602: INFO: Pod "busybox-user-65534-3beeca70-dd37-4beb-a59a-e110c979f6c3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.115153071s Feb 2 23:53:31.607: INFO: Pod "busybox-user-65534-3beeca70-dd37-4beb-a59a-e110c979f6c3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.120064412s Feb 2 23:53:31.607: INFO: Pod "busybox-user-65534-3beeca70-dd37-4beb-a59a-e110c979f6c3" satisfied condition "Succeeded or Failed" [AfterEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 2 23:53:31.607: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-3068" for this suite. 
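The runAsUser check above boils down to a single securityContext field on the container. A minimal sketch (illustrative pod name, busybox image):
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: demo-run-as-65534
spec:
  restartPolicy: Never
  containers:
  - name: busybox
    image: busybox
    command: ["sh", "-c", "id -u"]     # should print 65534
    securityContext:
      runAsUser: 65534
EOF
# once the pod has Succeeded, the log should read 65534
kubectl logs demo-run-as-65534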
•{"msg":"PASSED [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":309,"completed":195,"skipped":3260,"failed":0} ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 2 23:53:31.616: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating configMap with name configmap-test-volume-map-9ba04ff5-b110-4495-99b2-bfa59825e79d STEP: Creating a pod to test consume configMaps Feb 2 23:53:31.933: INFO: Waiting up to 5m0s for pod "pod-configmaps-ade7eb2b-1316-4e1c-a4be-494f39f6c570" in namespace "configmap-3020" to be "Succeeded or Failed" Feb 2 23:53:31.937: INFO: Pod "pod-configmaps-ade7eb2b-1316-4e1c-a4be-494f39f6c570": Phase="Pending", Reason="", readiness=false. Elapsed: 3.917543ms Feb 2 23:53:33.942: INFO: Pod "pod-configmaps-ade7eb2b-1316-4e1c-a4be-494f39f6c570": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00826443s Feb 2 23:53:35.946: INFO: Pod "pod-configmaps-ade7eb2b-1316-4e1c-a4be-494f39f6c570": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012276501s STEP: Saw pod success Feb 2 23:53:35.946: INFO: Pod "pod-configmaps-ade7eb2b-1316-4e1c-a4be-494f39f6c570" satisfied condition "Succeeded or Failed" Feb 2 23:53:35.948: INFO: Trying to get logs from node leguer-worker pod pod-configmaps-ade7eb2b-1316-4e1c-a4be-494f39f6c570 container agnhost-container: STEP: delete the pod Feb 2 23:53:36.023: INFO: Waiting for pod pod-configmaps-ade7eb2b-1316-4e1c-a4be-494f39f6c570 to disappear Feb 2 23:53:36.026: INFO: Pod pod-configmaps-ade7eb2b-1316-4e1c-a4be-494f39f6c570 no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 2 23:53:36.026: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-3020" for this suite. 
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":309,"completed":196,"skipped":3260,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [k8s.io] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 2 23:53:36.035: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:52 STEP: create the container to handle the HTTPGet hook request. [It] should execute prestop exec hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Feb 2 23:53:44.178: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Feb 2 23:53:44.183: INFO: Pod pod-with-prestop-exec-hook still exists Feb 2 23:53:46.183: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Feb 2 23:53:46.188: INFO: Pod pod-with-prestop-exec-hook still exists Feb 2 23:53:48.183: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Feb 2 23:53:48.188: INFO: Pod pod-with-prestop-exec-hook still exists Feb 2 23:53:50.183: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Feb 2 23:53:50.213: INFO: Pod pod-with-prestop-exec-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 2 23:53:50.219: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-7253" for this suite. 
• [SLOW TEST:14.192 seconds] [k8s.io] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 when create a pod with lifecycle hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:43 should execute prestop exec hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":309,"completed":197,"skipped":3309,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 2 23:53:50.227: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:745 [It] should be able to change the type from ExternalName to NodePort [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: creating a service externalname-service with the type=ExternalName in namespace services-7276 STEP: changing the ExternalName service to type=NodePort STEP: creating replication controller externalname-service in namespace services-7276 I0202 23:53:50.432538 7 runners.go:190] Created replication controller with name: externalname-service, namespace: services-7276, replica count: 2 I0202 23:53:53.482973 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0202 23:53:56.483268 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Feb 2 23:53:56.483: INFO: Creating new exec pod Feb 2 23:54:01.514: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=services-7276 exec execpod77bpc -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' Feb 2 23:54:05.219: INFO: stderr: "I0202 23:54:05.114918 2867 log.go:181] (0xc000dc80b0) (0xc0001f8000) Create stream\nI0202 23:54:05.115001 2867 log.go:181] (0xc000dc80b0) (0xc0001f8000) Stream added, broadcasting: 1\nI0202 23:54:05.119833 2867 log.go:181] (0xc000dc80b0) Reply frame received for 1\nI0202 23:54:05.119895 2867 log.go:181] (0xc000dc80b0) (0xc000ad6000) Create stream\nI0202 23:54:05.119909 2867 log.go:181] (0xc000dc80b0) (0xc000ad6000) Stream added, broadcasting: 3\nI0202 23:54:05.121146 2867 log.go:181] (0xc000dc80b0) Reply frame received for 3\nI0202 23:54:05.121187 2867 log.go:181] (0xc000dc80b0) (0xc0005fc000) Create stream\nI0202 23:54:05.121206 2867 log.go:181] (0xc000dc80b0) (0xc0005fc000) Stream added, broadcasting: 5\nI0202 
23:54:05.122411 2867 log.go:181] (0xc000dc80b0) Reply frame received for 5\nI0202 23:54:05.211575 2867 log.go:181] (0xc000dc80b0) Data frame received for 5\nI0202 23:54:05.211606 2867 log.go:181] (0xc0005fc000) (5) Data frame handling\nI0202 23:54:05.211625 2867 log.go:181] (0xc0005fc000) (5) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nI0202 23:54:05.211689 2867 log.go:181] (0xc000dc80b0) Data frame received for 5\nI0202 23:54:05.211705 2867 log.go:181] (0xc0005fc000) (5) Data frame handling\nI0202 23:54:05.211715 2867 log.go:181] (0xc0005fc000) (5) Data frame sent\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0202 23:54:05.212011 2867 log.go:181] (0xc000dc80b0) Data frame received for 3\nI0202 23:54:05.212039 2867 log.go:181] (0xc000ad6000) (3) Data frame handling\nI0202 23:54:05.212376 2867 log.go:181] (0xc000dc80b0) Data frame received for 5\nI0202 23:54:05.212402 2867 log.go:181] (0xc0005fc000) (5) Data frame handling\nI0202 23:54:05.214221 2867 log.go:181] (0xc000dc80b0) Data frame received for 1\nI0202 23:54:05.214239 2867 log.go:181] (0xc0001f8000) (1) Data frame handling\nI0202 23:54:05.214251 2867 log.go:181] (0xc0001f8000) (1) Data frame sent\nI0202 23:54:05.214267 2867 log.go:181] (0xc000dc80b0) (0xc0001f8000) Stream removed, broadcasting: 1\nI0202 23:54:05.214278 2867 log.go:181] (0xc000dc80b0) Go away received\nI0202 23:54:05.214730 2867 log.go:181] (0xc000dc80b0) (0xc0001f8000) Stream removed, broadcasting: 1\nI0202 23:54:05.214751 2867 log.go:181] (0xc000dc80b0) (0xc000ad6000) Stream removed, broadcasting: 3\nI0202 23:54:05.214759 2867 log.go:181] (0xc000dc80b0) (0xc0005fc000) Stream removed, broadcasting: 5\n" Feb 2 23:54:05.220: INFO: stdout: "" Feb 2 23:54:05.220: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=services-7276 exec execpod77bpc -- /bin/sh -x -c nc -zv -t -w 2 10.96.49.191 80' Feb 2 23:54:05.435: INFO: stderr: "I0202 23:54:05.351117 2885 log.go:181] (0xc00018c370) (0xc000207ae0) Create stream\nI0202 23:54:05.351189 2885 log.go:181] (0xc00018c370) (0xc000207ae0) Stream added, broadcasting: 1\nI0202 23:54:05.353409 2885 log.go:181] (0xc00018c370) Reply frame received for 1\nI0202 23:54:05.353459 2885 log.go:181] (0xc00018c370) (0xc000d2e1e0) Create stream\nI0202 23:54:05.353475 2885 log.go:181] (0xc00018c370) (0xc000d2e1e0) Stream added, broadcasting: 3\nI0202 23:54:05.354257 2885 log.go:181] (0xc00018c370) Reply frame received for 3\nI0202 23:54:05.354294 2885 log.go:181] (0xc00018c370) (0xc000bb8460) Create stream\nI0202 23:54:05.354307 2885 log.go:181] (0xc00018c370) (0xc000bb8460) Stream added, broadcasting: 5\nI0202 23:54:05.355122 2885 log.go:181] (0xc00018c370) Reply frame received for 5\nI0202 23:54:05.429253 2885 log.go:181] (0xc00018c370) Data frame received for 3\nI0202 23:54:05.429290 2885 log.go:181] (0xc000d2e1e0) (3) Data frame handling\nI0202 23:54:05.429316 2885 log.go:181] (0xc00018c370) Data frame received for 5\nI0202 23:54:05.429328 2885 log.go:181] (0xc000bb8460) (5) Data frame handling\nI0202 23:54:05.429338 2885 log.go:181] (0xc000bb8460) (5) Data frame sent\nI0202 23:54:05.429344 2885 log.go:181] (0xc00018c370) Data frame received for 5\nI0202 23:54:05.429348 2885 log.go:181] (0xc000bb8460) (5) Data frame handling\n+ nc -zv -t -w 2 10.96.49.191 80\nConnection to 10.96.49.191 80 port [tcp/http] succeeded!\nI0202 23:54:05.430510 2885 log.go:181] (0xc00018c370) Data frame received for 1\nI0202 23:54:05.430526 2885 log.go:181] 
(0xc000207ae0) (1) Data frame handling\nI0202 23:54:05.430537 2885 log.go:181] (0xc000207ae0) (1) Data frame sent\nI0202 23:54:05.430546 2885 log.go:181] (0xc00018c370) (0xc000207ae0) Stream removed, broadcasting: 1\nI0202 23:54:05.430557 2885 log.go:181] (0xc00018c370) Go away received\nI0202 23:54:05.430878 2885 log.go:181] (0xc00018c370) (0xc000207ae0) Stream removed, broadcasting: 1\nI0202 23:54:05.430894 2885 log.go:181] (0xc00018c370) (0xc000d2e1e0) Stream removed, broadcasting: 3\nI0202 23:54:05.430902 2885 log.go:181] (0xc00018c370) (0xc000bb8460) Stream removed, broadcasting: 5\n" Feb 2 23:54:05.435: INFO: stdout: "" Feb 2 23:54:05.435: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=services-7276 exec execpod77bpc -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.13 32424' Feb 2 23:54:05.656: INFO: stderr: "I0202 23:54:05.578612 2903 log.go:181] (0xc0000f6000) (0xc000386000) Create stream\nI0202 23:54:05.578692 2903 log.go:181] (0xc0000f6000) (0xc000386000) Stream added, broadcasting: 1\nI0202 23:54:05.581543 2903 log.go:181] (0xc0000f6000) Reply frame received for 1\nI0202 23:54:05.581602 2903 log.go:181] (0xc0000f6000) (0xc0004fe460) Create stream\nI0202 23:54:05.581619 2903 log.go:181] (0xc0000f6000) (0xc0004fe460) Stream added, broadcasting: 3\nI0202 23:54:05.582695 2903 log.go:181] (0xc0000f6000) Reply frame received for 3\nI0202 23:54:05.582733 2903 log.go:181] (0xc0000f6000) (0xc000386820) Create stream\nI0202 23:54:05.582748 2903 log.go:181] (0xc0000f6000) (0xc000386820) Stream added, broadcasting: 5\nI0202 23:54:05.583747 2903 log.go:181] (0xc0000f6000) Reply frame received for 5\nI0202 23:54:05.650048 2903 log.go:181] (0xc0000f6000) Data frame received for 5\nI0202 23:54:05.650086 2903 log.go:181] (0xc000386820) (5) Data frame handling\nI0202 23:54:05.650105 2903 log.go:181] (0xc000386820) (5) Data frame sent\n+ nc -zv -t -w 2 172.18.0.13 32424\nConnection to 172.18.0.13 32424 port [tcp/32424] succeeded!\nI0202 23:54:05.650204 2903 log.go:181] (0xc0000f6000) Data frame received for 3\nI0202 23:54:05.650237 2903 log.go:181] (0xc0004fe460) (3) Data frame handling\nI0202 23:54:05.650394 2903 log.go:181] (0xc0000f6000) Data frame received for 5\nI0202 23:54:05.650416 2903 log.go:181] (0xc000386820) (5) Data frame handling\nI0202 23:54:05.651823 2903 log.go:181] (0xc0000f6000) Data frame received for 1\nI0202 23:54:05.651852 2903 log.go:181] (0xc000386000) (1) Data frame handling\nI0202 23:54:05.651874 2903 log.go:181] (0xc000386000) (1) Data frame sent\nI0202 23:54:05.651893 2903 log.go:181] (0xc0000f6000) (0xc000386000) Stream removed, broadcasting: 1\nI0202 23:54:05.651924 2903 log.go:181] (0xc0000f6000) Go away received\nI0202 23:54:05.652440 2903 log.go:181] (0xc0000f6000) (0xc000386000) Stream removed, broadcasting: 1\nI0202 23:54:05.652458 2903 log.go:181] (0xc0000f6000) (0xc0004fe460) Stream removed, broadcasting: 3\nI0202 23:54:05.652467 2903 log.go:181] (0xc0000f6000) (0xc000386820) Stream removed, broadcasting: 5\n" Feb 2 23:54:05.656: INFO: stdout: "" Feb 2 23:54:05.656: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=services-7276 exec execpod77bpc -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.12 32424' Feb 2 23:54:05.857: INFO: stderr: "I0202 23:54:05.783484 2921 log.go:181] (0xc000e9f8c0) (0xc000e96be0) Create stream\nI0202 23:54:05.783530 2921 log.go:181] (0xc000e9f8c0) (0xc000e96be0) Stream added, broadcasting: 1\nI0202 
23:54:05.788685 2921 log.go:181] (0xc000e9f8c0) Reply frame received for 1\nI0202 23:54:05.788727 2921 log.go:181] (0xc000e9f8c0) (0xc000e96000) Create stream\nI0202 23:54:05.788743 2921 log.go:181] (0xc000e9f8c0) (0xc000e96000) Stream added, broadcasting: 3\nI0202 23:54:05.790104 2921 log.go:181] (0xc000e9f8c0) Reply frame received for 3\nI0202 23:54:05.790143 2921 log.go:181] (0xc000e9f8c0) (0xc000b7e000) Create stream\nI0202 23:54:05.790156 2921 log.go:181] (0xc000e9f8c0) (0xc000b7e000) Stream added, broadcasting: 5\nI0202 23:54:05.792617 2921 log.go:181] (0xc000e9f8c0) Reply frame received for 5\nI0202 23:54:05.849338 2921 log.go:181] (0xc000e9f8c0) Data frame received for 3\nI0202 23:54:05.849397 2921 log.go:181] (0xc000e96000) (3) Data frame handling\nI0202 23:54:05.849436 2921 log.go:181] (0xc000e9f8c0) Data frame received for 5\nI0202 23:54:05.849455 2921 log.go:181] (0xc000b7e000) (5) Data frame handling\nI0202 23:54:05.849485 2921 log.go:181] (0xc000b7e000) (5) Data frame sent\nI0202 23:54:05.849504 2921 log.go:181] (0xc000e9f8c0) Data frame received for 5\nI0202 23:54:05.849519 2921 log.go:181] (0xc000b7e000) (5) Data frame handling\n+ nc -zv -t -w 2 172.18.0.12 32424\nConnection to 172.18.0.12 32424 port [tcp/32424] succeeded!\nI0202 23:54:05.851017 2921 log.go:181] (0xc000e9f8c0) Data frame received for 1\nI0202 23:54:05.851052 2921 log.go:181] (0xc000e96be0) (1) Data frame handling\nI0202 23:54:05.851087 2921 log.go:181] (0xc000e96be0) (1) Data frame sent\nI0202 23:54:05.851109 2921 log.go:181] (0xc000e9f8c0) (0xc000e96be0) Stream removed, broadcasting: 1\nI0202 23:54:05.851131 2921 log.go:181] (0xc000e9f8c0) Go away received\nI0202 23:54:05.851710 2921 log.go:181] (0xc000e9f8c0) (0xc000e96be0) Stream removed, broadcasting: 1\nI0202 23:54:05.851753 2921 log.go:181] (0xc000e9f8c0) (0xc000e96000) Stream removed, broadcasting: 3\nI0202 23:54:05.851779 2921 log.go:181] (0xc000e9f8c0) (0xc000b7e000) Stream removed, broadcasting: 5\n" Feb 2 23:54:05.857: INFO: stdout: "" Feb 2 23:54:05.857: INFO: Cleaning up the ExternalName to NodePort test service [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 2 23:54:05.891: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-7276" for this suite. 
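The type change above (ExternalName to NodePort, followed by nc reachability checks against the service name, the ClusterIP, and both node addresses) can be approximated by hand. This is a rough sketch with illustrative names, using a plain nginx backend and an in-cluster wget instead of the suite's agnhost/nc exec pod; the exact patch may need adjusting for your API server's validation.
kubectl create namespace demo-svc
kubectl -n demo-svc create service externalname demo-extname --external-name=demo.example.com
kubectl -n demo-svc create deployment demo-backend --image=nginx
kubectl -n demo-svc patch service demo-extname -p '{"spec":{"type":"NodePort","externalName":null,"selector":{"app":"demo-backend"},"ports":[{"port":80,"targetPort":80}]}}'
kubectl -n demo-svc get service demo-extname        # should now show TYPE NodePort with an allocated node port
# in-cluster reachability check, analogous to the suite's "nc -zv" probes:
kubectl -n demo-svc run check --rm -i --restart=Never --image=busybox -- wget -qO- http://demo-extname
kubectl delete namespace demo-svc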
[AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 • [SLOW TEST:15.681 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ExternalName to NodePort [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":309,"completed":198,"skipped":3346,"failed":0} SSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 2 23:54:05.908: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Feb 2 23:54:06.544: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Feb 2 23:54:08.626: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63747906846, loc:(*time.Location)(0x7962e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63747906846, loc:(*time.Location)(0x7962e20)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63747906846, loc:(*time.Location)(0x7962e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63747906846, loc:(*time.Location)(0x7962e20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6bd9446d55\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Feb 2 23:54:11.680: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should include webhook resources in discovery documents [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: fetching the /apis discovery document STEP: finding the admissionregistration.k8s.io API group in the /apis discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/admissionregistration.k8s.io discovery document STEP: finding the 
admissionregistration.k8s.io/v1 API group/version in the /apis/admissionregistration.k8s.io discovery document STEP: fetching the /apis/admissionregistration.k8s.io/v1 discovery document STEP: finding mutatingwebhookconfigurations and validatingwebhookconfigurations resources in the /apis/admissionregistration.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 2 23:54:11.865: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-6056" for this suite. STEP: Destroying namespace "webhook-6056-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 • [SLOW TEST:6.122 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should include webhook resources in discovery documents [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":309,"completed":199,"skipped":3351,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 2 23:54:12.030: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating configMap with name configmap-test-volume-8c43d199-feb1-4758-aee9-cc52da3f300c STEP: Creating a pod to test consume configMaps Feb 2 23:54:12.461: INFO: Waiting up to 5m0s for pod "pod-configmaps-53587475-c027-446d-a3c3-51155c9244a4" in namespace "configmap-2270" to be "Succeeded or Failed" Feb 2 23:54:12.495: INFO: Pod "pod-configmaps-53587475-c027-446d-a3c3-51155c9244a4": Phase="Pending", Reason="", readiness=false. Elapsed: 34.107658ms Feb 2 23:54:14.527: INFO: Pod "pod-configmaps-53587475-c027-446d-a3c3-51155c9244a4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.065184295s Feb 2 23:54:16.535: INFO: Pod "pod-configmaps-53587475-c027-446d-a3c3-51155c9244a4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.074051501s Feb 2 23:54:18.541: INFO: Pod "pod-configmaps-53587475-c027-446d-a3c3-51155c9244a4": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.079206105s STEP: Saw pod success Feb 2 23:54:18.541: INFO: Pod "pod-configmaps-53587475-c027-446d-a3c3-51155c9244a4" satisfied condition "Succeeded or Failed" Feb 2 23:54:18.544: INFO: Trying to get logs from node leguer-worker pod pod-configmaps-53587475-c027-446d-a3c3-51155c9244a4 container agnhost-container: STEP: delete the pod Feb 2 23:54:18.628: INFO: Waiting for pod pod-configmaps-53587475-c027-446d-a3c3-51155c9244a4 to disappear Feb 2 23:54:18.639: INFO: Pod pod-configmaps-53587475-c027-446d-a3c3-51155c9244a4 no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 2 23:54:18.639: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-2270" for this suite. • [SLOW TEST:6.618 seconds] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":309,"completed":200,"skipped":3401,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 2 23:54:18.649: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-9272 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-9272;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-9272 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-9272;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-9272.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-9272.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-9272.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-9272.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-9272.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-9272.svc;check="$$(dig +tcp +noall +answer +search 
_http._tcp.dns-test-service.dns-9272.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-9272.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-9272.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-9272.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-9272.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-9272.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9272.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 235.249.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.249.235_udp@PTR;check="$$(dig +tcp +noall +answer +search 235.249.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.249.235_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-9272 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-9272;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-9272 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-9272;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-9272.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-9272.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-9272.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-9272.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-9272.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-9272.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-9272.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-9272.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-9272.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-9272.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-9272.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-9272.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9272.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 235.249.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.249.235_udp@PTR;check="$$(dig +tcp +noall +answer +search 235.249.96.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.96.249.235_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Feb 2 23:54:24.962: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-9272/dns-test-a9f97f0f-9299-4a19-bb68-39810ffc1be7: the server could not find the requested resource (get pods dns-test-a9f97f0f-9299-4a19-bb68-39810ffc1be7) Feb 2 23:54:24.965: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-9272/dns-test-a9f97f0f-9299-4a19-bb68-39810ffc1be7: the server could not find the requested resource (get pods dns-test-a9f97f0f-9299-4a19-bb68-39810ffc1be7) Feb 2 23:54:24.968: INFO: Unable to read wheezy_udp@dns-test-service.dns-9272 from pod dns-9272/dns-test-a9f97f0f-9299-4a19-bb68-39810ffc1be7: the server could not find the requested resource (get pods dns-test-a9f97f0f-9299-4a19-bb68-39810ffc1be7) Feb 2 23:54:24.970: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9272 from pod dns-9272/dns-test-a9f97f0f-9299-4a19-bb68-39810ffc1be7: the server could not find the requested resource (get pods dns-test-a9f97f0f-9299-4a19-bb68-39810ffc1be7) Feb 2 23:54:24.973: INFO: Unable to read wheezy_udp@dns-test-service.dns-9272.svc from pod dns-9272/dns-test-a9f97f0f-9299-4a19-bb68-39810ffc1be7: the server could not find the requested resource (get pods dns-test-a9f97f0f-9299-4a19-bb68-39810ffc1be7) Feb 2 23:54:24.975: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9272.svc from pod dns-9272/dns-test-a9f97f0f-9299-4a19-bb68-39810ffc1be7: the server could not find the requested resource (get pods dns-test-a9f97f0f-9299-4a19-bb68-39810ffc1be7) Feb 2 23:54:24.978: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9272.svc from pod dns-9272/dns-test-a9f97f0f-9299-4a19-bb68-39810ffc1be7: the server could not find the requested resource (get pods dns-test-a9f97f0f-9299-4a19-bb68-39810ffc1be7) Feb 2 23:54:24.982: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9272.svc from pod dns-9272/dns-test-a9f97f0f-9299-4a19-bb68-39810ffc1be7: the server could not find the requested resource (get pods dns-test-a9f97f0f-9299-4a19-bb68-39810ffc1be7) Feb 2 23:54:25.009: INFO: Unable to read jessie_udp@dns-test-service from pod dns-9272/dns-test-a9f97f0f-9299-4a19-bb68-39810ffc1be7: the server could not find the requested resource (get pods dns-test-a9f97f0f-9299-4a19-bb68-39810ffc1be7) Feb 2 23:54:25.011: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-9272/dns-test-a9f97f0f-9299-4a19-bb68-39810ffc1be7: the server could not find the requested resource (get pods dns-test-a9f97f0f-9299-4a19-bb68-39810ffc1be7) Feb 2 23:54:25.013: INFO: Unable to read jessie_udp@dns-test-service.dns-9272 from pod dns-9272/dns-test-a9f97f0f-9299-4a19-bb68-39810ffc1be7: the server could not find the requested resource (get pods dns-test-a9f97f0f-9299-4a19-bb68-39810ffc1be7) Feb 2 23:54:25.016: INFO: Unable to read jessie_tcp@dns-test-service.dns-9272 from pod dns-9272/dns-test-a9f97f0f-9299-4a19-bb68-39810ffc1be7: the server could not find the requested resource (get pods dns-test-a9f97f0f-9299-4a19-bb68-39810ffc1be7) Feb 2 23:54:25.021: INFO: Unable to read jessie_udp@dns-test-service.dns-9272.svc from pod dns-9272/dns-test-a9f97f0f-9299-4a19-bb68-39810ffc1be7: the server could not find the requested resource (get pods dns-test-a9f97f0f-9299-4a19-bb68-39810ffc1be7) Feb 2 23:54:25.024: INFO: Unable to read 
jessie_tcp@dns-test-service.dns-9272.svc from pod dns-9272/dns-test-a9f97f0f-9299-4a19-bb68-39810ffc1be7: the server could not find the requested resource (get pods dns-test-a9f97f0f-9299-4a19-bb68-39810ffc1be7) Feb 2 23:54:25.027: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9272.svc from pod dns-9272/dns-test-a9f97f0f-9299-4a19-bb68-39810ffc1be7: the server could not find the requested resource (get pods dns-test-a9f97f0f-9299-4a19-bb68-39810ffc1be7) Feb 2 23:54:25.029: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9272.svc from pod dns-9272/dns-test-a9f97f0f-9299-4a19-bb68-39810ffc1be7: the server could not find the requested resource (get pods dns-test-a9f97f0f-9299-4a19-bb68-39810ffc1be7) Feb 2 23:54:25.044: INFO: Lookups using dns-9272/dns-test-a9f97f0f-9299-4a19-bb68-39810ffc1be7 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-9272 wheezy_tcp@dns-test-service.dns-9272 wheezy_udp@dns-test-service.dns-9272.svc wheezy_tcp@dns-test-service.dns-9272.svc wheezy_udp@_http._tcp.dns-test-service.dns-9272.svc wheezy_tcp@_http._tcp.dns-test-service.dns-9272.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-9272 jessie_tcp@dns-test-service.dns-9272 jessie_udp@dns-test-service.dns-9272.svc jessie_tcp@dns-test-service.dns-9272.svc jessie_udp@_http._tcp.dns-test-service.dns-9272.svc jessie_tcp@_http._tcp.dns-test-service.dns-9272.svc] Feb 2 23:54:30.049: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-9272/dns-test-a9f97f0f-9299-4a19-bb68-39810ffc1be7: the server could not find the requested resource (get pods dns-test-a9f97f0f-9299-4a19-bb68-39810ffc1be7) Feb 2 23:54:30.053: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-9272/dns-test-a9f97f0f-9299-4a19-bb68-39810ffc1be7: the server could not find the requested resource (get pods dns-test-a9f97f0f-9299-4a19-bb68-39810ffc1be7) Feb 2 23:54:30.056: INFO: Unable to read wheezy_udp@dns-test-service.dns-9272 from pod dns-9272/dns-test-a9f97f0f-9299-4a19-bb68-39810ffc1be7: the server could not find the requested resource (get pods dns-test-a9f97f0f-9299-4a19-bb68-39810ffc1be7) Feb 2 23:54:30.059: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9272 from pod dns-9272/dns-test-a9f97f0f-9299-4a19-bb68-39810ffc1be7: the server could not find the requested resource (get pods dns-test-a9f97f0f-9299-4a19-bb68-39810ffc1be7) Feb 2 23:54:30.062: INFO: Unable to read wheezy_udp@dns-test-service.dns-9272.svc from pod dns-9272/dns-test-a9f97f0f-9299-4a19-bb68-39810ffc1be7: the server could not find the requested resource (get pods dns-test-a9f97f0f-9299-4a19-bb68-39810ffc1be7) Feb 2 23:54:30.065: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9272.svc from pod dns-9272/dns-test-a9f97f0f-9299-4a19-bb68-39810ffc1be7: the server could not find the requested resource (get pods dns-test-a9f97f0f-9299-4a19-bb68-39810ffc1be7) Feb 2 23:54:30.068: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9272.svc from pod dns-9272/dns-test-a9f97f0f-9299-4a19-bb68-39810ffc1be7: the server could not find the requested resource (get pods dns-test-a9f97f0f-9299-4a19-bb68-39810ffc1be7) Feb 2 23:54:30.071: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9272.svc from pod dns-9272/dns-test-a9f97f0f-9299-4a19-bb68-39810ffc1be7: the server could not find the requested resource (get pods dns-test-a9f97f0f-9299-4a19-bb68-39810ffc1be7) Feb 2 23:54:30.092: INFO: Unable to read 
jessie_udp@dns-test-service from pod dns-9272/dns-test-a9f97f0f-9299-4a19-bb68-39810ffc1be7: the server could not find the requested resource (get pods dns-test-a9f97f0f-9299-4a19-bb68-39810ffc1be7) Feb 2 23:54:30.094: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-9272/dns-test-a9f97f0f-9299-4a19-bb68-39810ffc1be7: the server could not find the requested resource (get pods dns-test-a9f97f0f-9299-4a19-bb68-39810ffc1be7) Feb 2 23:54:30.096: INFO: Unable to read jessie_udp@dns-test-service.dns-9272 from pod dns-9272/dns-test-a9f97f0f-9299-4a19-bb68-39810ffc1be7: the server could not find the requested resource (get pods dns-test-a9f97f0f-9299-4a19-bb68-39810ffc1be7) Feb 2 23:54:30.099: INFO: Unable to read jessie_tcp@dns-test-service.dns-9272 from pod dns-9272/dns-test-a9f97f0f-9299-4a19-bb68-39810ffc1be7: the server could not find the requested resource (get pods dns-test-a9f97f0f-9299-4a19-bb68-39810ffc1be7) Feb 2 23:54:30.101: INFO: Unable to read jessie_udp@dns-test-service.dns-9272.svc from pod dns-9272/dns-test-a9f97f0f-9299-4a19-bb68-39810ffc1be7: the server could not find the requested resource (get pods dns-test-a9f97f0f-9299-4a19-bb68-39810ffc1be7) Feb 2 23:54:30.104: INFO: Unable to read jessie_tcp@dns-test-service.dns-9272.svc from pod dns-9272/dns-test-a9f97f0f-9299-4a19-bb68-39810ffc1be7: the server could not find the requested resource (get pods dns-test-a9f97f0f-9299-4a19-bb68-39810ffc1be7) Feb 2 23:54:30.107: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9272.svc from pod dns-9272/dns-test-a9f97f0f-9299-4a19-bb68-39810ffc1be7: the server could not find the requested resource (get pods dns-test-a9f97f0f-9299-4a19-bb68-39810ffc1be7) Feb 2 23:54:30.110: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9272.svc from pod dns-9272/dns-test-a9f97f0f-9299-4a19-bb68-39810ffc1be7: the server could not find the requested resource (get pods dns-test-a9f97f0f-9299-4a19-bb68-39810ffc1be7) Feb 2 23:54:30.126: INFO: Lookups using dns-9272/dns-test-a9f97f0f-9299-4a19-bb68-39810ffc1be7 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-9272 wheezy_tcp@dns-test-service.dns-9272 wheezy_udp@dns-test-service.dns-9272.svc wheezy_tcp@dns-test-service.dns-9272.svc wheezy_udp@_http._tcp.dns-test-service.dns-9272.svc wheezy_tcp@_http._tcp.dns-test-service.dns-9272.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-9272 jessie_tcp@dns-test-service.dns-9272 jessie_udp@dns-test-service.dns-9272.svc jessie_tcp@dns-test-service.dns-9272.svc jessie_udp@_http._tcp.dns-test-service.dns-9272.svc jessie_tcp@_http._tcp.dns-test-service.dns-9272.svc] Feb 2 23:54:35.049: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-9272/dns-test-a9f97f0f-9299-4a19-bb68-39810ffc1be7: the server could not find the requested resource (get pods dns-test-a9f97f0f-9299-4a19-bb68-39810ffc1be7) Feb 2 23:54:35.053: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-9272/dns-test-a9f97f0f-9299-4a19-bb68-39810ffc1be7: the server could not find the requested resource (get pods dns-test-a9f97f0f-9299-4a19-bb68-39810ffc1be7) Feb 2 23:54:35.056: INFO: Unable to read wheezy_udp@dns-test-service.dns-9272 from pod dns-9272/dns-test-a9f97f0f-9299-4a19-bb68-39810ffc1be7: the server could not find the requested resource (get pods dns-test-a9f97f0f-9299-4a19-bb68-39810ffc1be7) Feb 2 23:54:35.059: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9272 from pod 
dns-9272/dns-test-a9f97f0f-9299-4a19-bb68-39810ffc1be7: the server could not find the requested resource (get pods dns-test-a9f97f0f-9299-4a19-bb68-39810ffc1be7) Feb 2 23:54:35.062: INFO: Unable to read wheezy_udp@dns-test-service.dns-9272.svc from pod dns-9272/dns-test-a9f97f0f-9299-4a19-bb68-39810ffc1be7: the server could not find the requested resource (get pods dns-test-a9f97f0f-9299-4a19-bb68-39810ffc1be7) Feb 2 23:54:35.065: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9272.svc from pod dns-9272/dns-test-a9f97f0f-9299-4a19-bb68-39810ffc1be7: the server could not find the requested resource (get pods dns-test-a9f97f0f-9299-4a19-bb68-39810ffc1be7) Feb 2 23:54:35.068: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9272.svc from pod dns-9272/dns-test-a9f97f0f-9299-4a19-bb68-39810ffc1be7: the server could not find the requested resource (get pods dns-test-a9f97f0f-9299-4a19-bb68-39810ffc1be7) Feb 2 23:54:35.072: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9272.svc from pod dns-9272/dns-test-a9f97f0f-9299-4a19-bb68-39810ffc1be7: the server could not find the requested resource (get pods dns-test-a9f97f0f-9299-4a19-bb68-39810ffc1be7) Feb 2 23:54:35.093: INFO: Unable to read jessie_udp@dns-test-service from pod dns-9272/dns-test-a9f97f0f-9299-4a19-bb68-39810ffc1be7: the server could not find the requested resource (get pods dns-test-a9f97f0f-9299-4a19-bb68-39810ffc1be7) Feb 2 23:54:35.096: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-9272/dns-test-a9f97f0f-9299-4a19-bb68-39810ffc1be7: the server could not find the requested resource (get pods dns-test-a9f97f0f-9299-4a19-bb68-39810ffc1be7) Feb 2 23:54:35.099: INFO: Unable to read jessie_udp@dns-test-service.dns-9272 from pod dns-9272/dns-test-a9f97f0f-9299-4a19-bb68-39810ffc1be7: the server could not find the requested resource (get pods dns-test-a9f97f0f-9299-4a19-bb68-39810ffc1be7) Feb 2 23:54:35.102: INFO: Unable to read jessie_tcp@dns-test-service.dns-9272 from pod dns-9272/dns-test-a9f97f0f-9299-4a19-bb68-39810ffc1be7: the server could not find the requested resource (get pods dns-test-a9f97f0f-9299-4a19-bb68-39810ffc1be7) Feb 2 23:54:35.105: INFO: Unable to read jessie_udp@dns-test-service.dns-9272.svc from pod dns-9272/dns-test-a9f97f0f-9299-4a19-bb68-39810ffc1be7: the server could not find the requested resource (get pods dns-test-a9f97f0f-9299-4a19-bb68-39810ffc1be7) Feb 2 23:54:35.107: INFO: Unable to read jessie_tcp@dns-test-service.dns-9272.svc from pod dns-9272/dns-test-a9f97f0f-9299-4a19-bb68-39810ffc1be7: the server could not find the requested resource (get pods dns-test-a9f97f0f-9299-4a19-bb68-39810ffc1be7) Feb 2 23:54:35.110: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9272.svc from pod dns-9272/dns-test-a9f97f0f-9299-4a19-bb68-39810ffc1be7: the server could not find the requested resource (get pods dns-test-a9f97f0f-9299-4a19-bb68-39810ffc1be7) Feb 2 23:54:35.113: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9272.svc from pod dns-9272/dns-test-a9f97f0f-9299-4a19-bb68-39810ffc1be7: the server could not find the requested resource (get pods dns-test-a9f97f0f-9299-4a19-bb68-39810ffc1be7) Feb 2 23:54:35.134: INFO: Lookups using dns-9272/dns-test-a9f97f0f-9299-4a19-bb68-39810ffc1be7 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-9272 wheezy_tcp@dns-test-service.dns-9272 wheezy_udp@dns-test-service.dns-9272.svc wheezy_tcp@dns-test-service.dns-9272.svc 
wheezy_udp@_http._tcp.dns-test-service.dns-9272.svc wheezy_tcp@_http._tcp.dns-test-service.dns-9272.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-9272 jessie_tcp@dns-test-service.dns-9272 jessie_udp@dns-test-service.dns-9272.svc jessie_tcp@dns-test-service.dns-9272.svc jessie_udp@_http._tcp.dns-test-service.dns-9272.svc jessie_tcp@_http._tcp.dns-test-service.dns-9272.svc] Feb 2 23:54:40.049: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-9272/dns-test-a9f97f0f-9299-4a19-bb68-39810ffc1be7: the server could not find the requested resource (get pods dns-test-a9f97f0f-9299-4a19-bb68-39810ffc1be7) Feb 2 23:54:40.052: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-9272/dns-test-a9f97f0f-9299-4a19-bb68-39810ffc1be7: the server could not find the requested resource (get pods dns-test-a9f97f0f-9299-4a19-bb68-39810ffc1be7) Feb 2 23:54:40.055: INFO: Unable to read wheezy_udp@dns-test-service.dns-9272 from pod dns-9272/dns-test-a9f97f0f-9299-4a19-bb68-39810ffc1be7: the server could not find the requested resource (get pods dns-test-a9f97f0f-9299-4a19-bb68-39810ffc1be7) Feb 2 23:54:40.058: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9272 from pod dns-9272/dns-test-a9f97f0f-9299-4a19-bb68-39810ffc1be7: the server could not find the requested resource (get pods dns-test-a9f97f0f-9299-4a19-bb68-39810ffc1be7) Feb 2 23:54:40.062: INFO: Unable to read wheezy_udp@dns-test-service.dns-9272.svc from pod dns-9272/dns-test-a9f97f0f-9299-4a19-bb68-39810ffc1be7: the server could not find the requested resource (get pods dns-test-a9f97f0f-9299-4a19-bb68-39810ffc1be7) Feb 2 23:54:40.065: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9272.svc from pod dns-9272/dns-test-a9f97f0f-9299-4a19-bb68-39810ffc1be7: the server could not find the requested resource (get pods dns-test-a9f97f0f-9299-4a19-bb68-39810ffc1be7) Feb 2 23:54:40.068: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9272.svc from pod dns-9272/dns-test-a9f97f0f-9299-4a19-bb68-39810ffc1be7: the server could not find the requested resource (get pods dns-test-a9f97f0f-9299-4a19-bb68-39810ffc1be7) Feb 2 23:54:40.072: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9272.svc from pod dns-9272/dns-test-a9f97f0f-9299-4a19-bb68-39810ffc1be7: the server could not find the requested resource (get pods dns-test-a9f97f0f-9299-4a19-bb68-39810ffc1be7) Feb 2 23:54:40.093: INFO: Unable to read jessie_udp@dns-test-service from pod dns-9272/dns-test-a9f97f0f-9299-4a19-bb68-39810ffc1be7: the server could not find the requested resource (get pods dns-test-a9f97f0f-9299-4a19-bb68-39810ffc1be7) Feb 2 23:54:40.096: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-9272/dns-test-a9f97f0f-9299-4a19-bb68-39810ffc1be7: the server could not find the requested resource (get pods dns-test-a9f97f0f-9299-4a19-bb68-39810ffc1be7) Feb 2 23:54:40.099: INFO: Unable to read jessie_udp@dns-test-service.dns-9272 from pod dns-9272/dns-test-a9f97f0f-9299-4a19-bb68-39810ffc1be7: the server could not find the requested resource (get pods dns-test-a9f97f0f-9299-4a19-bb68-39810ffc1be7) Feb 2 23:54:40.102: INFO: Unable to read jessie_tcp@dns-test-service.dns-9272 from pod dns-9272/dns-test-a9f97f0f-9299-4a19-bb68-39810ffc1be7: the server could not find the requested resource (get pods dns-test-a9f97f0f-9299-4a19-bb68-39810ffc1be7) Feb 2 23:54:40.105: INFO: Unable to read jessie_udp@dns-test-service.dns-9272.svc from pod 
dns-9272/dns-test-a9f97f0f-9299-4a19-bb68-39810ffc1be7: the server could not find the requested resource (get pods dns-test-a9f97f0f-9299-4a19-bb68-39810ffc1be7) Feb 2 23:54:40.108: INFO: Unable to read jessie_tcp@dns-test-service.dns-9272.svc from pod dns-9272/dns-test-a9f97f0f-9299-4a19-bb68-39810ffc1be7: the server could not find the requested resource (get pods dns-test-a9f97f0f-9299-4a19-bb68-39810ffc1be7) Feb 2 23:54:40.111: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9272.svc from pod dns-9272/dns-test-a9f97f0f-9299-4a19-bb68-39810ffc1be7: the server could not find the requested resource (get pods dns-test-a9f97f0f-9299-4a19-bb68-39810ffc1be7) Feb 2 23:54:40.114: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9272.svc from pod dns-9272/dns-test-a9f97f0f-9299-4a19-bb68-39810ffc1be7: the server could not find the requested resource (get pods dns-test-a9f97f0f-9299-4a19-bb68-39810ffc1be7) Feb 2 23:54:40.136: INFO: Lookups using dns-9272/dns-test-a9f97f0f-9299-4a19-bb68-39810ffc1be7 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-9272 wheezy_tcp@dns-test-service.dns-9272 wheezy_udp@dns-test-service.dns-9272.svc wheezy_tcp@dns-test-service.dns-9272.svc wheezy_udp@_http._tcp.dns-test-service.dns-9272.svc wheezy_tcp@_http._tcp.dns-test-service.dns-9272.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-9272 jessie_tcp@dns-test-service.dns-9272 jessie_udp@dns-test-service.dns-9272.svc jessie_tcp@dns-test-service.dns-9272.svc jessie_udp@_http._tcp.dns-test-service.dns-9272.svc jessie_tcp@_http._tcp.dns-test-service.dns-9272.svc] Feb 2 23:54:45.049: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-9272/dns-test-a9f97f0f-9299-4a19-bb68-39810ffc1be7: the server could not find the requested resource (get pods dns-test-a9f97f0f-9299-4a19-bb68-39810ffc1be7) Feb 2 23:54:45.054: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-9272/dns-test-a9f97f0f-9299-4a19-bb68-39810ffc1be7: the server could not find the requested resource (get pods dns-test-a9f97f0f-9299-4a19-bb68-39810ffc1be7) Feb 2 23:54:45.057: INFO: Unable to read wheezy_udp@dns-test-service.dns-9272 from pod dns-9272/dns-test-a9f97f0f-9299-4a19-bb68-39810ffc1be7: the server could not find the requested resource (get pods dns-test-a9f97f0f-9299-4a19-bb68-39810ffc1be7) Feb 2 23:54:45.060: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9272 from pod dns-9272/dns-test-a9f97f0f-9299-4a19-bb68-39810ffc1be7: the server could not find the requested resource (get pods dns-test-a9f97f0f-9299-4a19-bb68-39810ffc1be7) Feb 2 23:54:45.063: INFO: Unable to read wheezy_udp@dns-test-service.dns-9272.svc from pod dns-9272/dns-test-a9f97f0f-9299-4a19-bb68-39810ffc1be7: the server could not find the requested resource (get pods dns-test-a9f97f0f-9299-4a19-bb68-39810ffc1be7) Feb 2 23:54:45.067: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9272.svc from pod dns-9272/dns-test-a9f97f0f-9299-4a19-bb68-39810ffc1be7: the server could not find the requested resource (get pods dns-test-a9f97f0f-9299-4a19-bb68-39810ffc1be7) Feb 2 23:54:45.069: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9272.svc from pod dns-9272/dns-test-a9f97f0f-9299-4a19-bb68-39810ffc1be7: the server could not find the requested resource (get pods dns-test-a9f97f0f-9299-4a19-bb68-39810ffc1be7) Feb 2 23:54:45.072: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9272.svc from pod 
dns-9272/dns-test-a9f97f0f-9299-4a19-bb68-39810ffc1be7: the server could not find the requested resource (get pods dns-test-a9f97f0f-9299-4a19-bb68-39810ffc1be7) Feb 2 23:54:45.125: INFO: Unable to read jessie_udp@dns-test-service from pod dns-9272/dns-test-a9f97f0f-9299-4a19-bb68-39810ffc1be7: the server could not find the requested resource (get pods dns-test-a9f97f0f-9299-4a19-bb68-39810ffc1be7) Feb 2 23:54:45.128: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-9272/dns-test-a9f97f0f-9299-4a19-bb68-39810ffc1be7: the server could not find the requested resource (get pods dns-test-a9f97f0f-9299-4a19-bb68-39810ffc1be7) Feb 2 23:54:45.132: INFO: Unable to read jessie_udp@dns-test-service.dns-9272 from pod dns-9272/dns-test-a9f97f0f-9299-4a19-bb68-39810ffc1be7: the server could not find the requested resource (get pods dns-test-a9f97f0f-9299-4a19-bb68-39810ffc1be7) Feb 2 23:54:45.135: INFO: Unable to read jessie_tcp@dns-test-service.dns-9272 from pod dns-9272/dns-test-a9f97f0f-9299-4a19-bb68-39810ffc1be7: the server could not find the requested resource (get pods dns-test-a9f97f0f-9299-4a19-bb68-39810ffc1be7) Feb 2 23:54:45.138: INFO: Unable to read jessie_udp@dns-test-service.dns-9272.svc from pod dns-9272/dns-test-a9f97f0f-9299-4a19-bb68-39810ffc1be7: the server could not find the requested resource (get pods dns-test-a9f97f0f-9299-4a19-bb68-39810ffc1be7) Feb 2 23:54:45.141: INFO: Unable to read jessie_tcp@dns-test-service.dns-9272.svc from pod dns-9272/dns-test-a9f97f0f-9299-4a19-bb68-39810ffc1be7: the server could not find the requested resource (get pods dns-test-a9f97f0f-9299-4a19-bb68-39810ffc1be7) Feb 2 23:54:45.144: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9272.svc from pod dns-9272/dns-test-a9f97f0f-9299-4a19-bb68-39810ffc1be7: the server could not find the requested resource (get pods dns-test-a9f97f0f-9299-4a19-bb68-39810ffc1be7) Feb 2 23:54:45.146: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9272.svc from pod dns-9272/dns-test-a9f97f0f-9299-4a19-bb68-39810ffc1be7: the server could not find the requested resource (get pods dns-test-a9f97f0f-9299-4a19-bb68-39810ffc1be7) Feb 2 23:54:45.164: INFO: Lookups using dns-9272/dns-test-a9f97f0f-9299-4a19-bb68-39810ffc1be7 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-9272 wheezy_tcp@dns-test-service.dns-9272 wheezy_udp@dns-test-service.dns-9272.svc wheezy_tcp@dns-test-service.dns-9272.svc wheezy_udp@_http._tcp.dns-test-service.dns-9272.svc wheezy_tcp@_http._tcp.dns-test-service.dns-9272.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-9272 jessie_tcp@dns-test-service.dns-9272 jessie_udp@dns-test-service.dns-9272.svc jessie_tcp@dns-test-service.dns-9272.svc jessie_udp@_http._tcp.dns-test-service.dns-9272.svc jessie_tcp@_http._tcp.dns-test-service.dns-9272.svc] Feb 2 23:54:50.049: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-9272/dns-test-a9f97f0f-9299-4a19-bb68-39810ffc1be7: the server could not find the requested resource (get pods dns-test-a9f97f0f-9299-4a19-bb68-39810ffc1be7) Feb 2 23:54:50.053: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-9272/dns-test-a9f97f0f-9299-4a19-bb68-39810ffc1be7: the server could not find the requested resource (get pods dns-test-a9f97f0f-9299-4a19-bb68-39810ffc1be7) Feb 2 23:54:50.055: INFO: Unable to read wheezy_udp@dns-test-service.dns-9272 from pod dns-9272/dns-test-a9f97f0f-9299-4a19-bb68-39810ffc1be7: the server could 
not find the requested resource (get pods dns-test-a9f97f0f-9299-4a19-bb68-39810ffc1be7) Feb 2 23:54:50.058: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9272 from pod dns-9272/dns-test-a9f97f0f-9299-4a19-bb68-39810ffc1be7: the server could not find the requested resource (get pods dns-test-a9f97f0f-9299-4a19-bb68-39810ffc1be7) Feb 2 23:54:50.061: INFO: Unable to read wheezy_udp@dns-test-service.dns-9272.svc from pod dns-9272/dns-test-a9f97f0f-9299-4a19-bb68-39810ffc1be7: the server could not find the requested resource (get pods dns-test-a9f97f0f-9299-4a19-bb68-39810ffc1be7) Feb 2 23:54:50.064: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9272.svc from pod dns-9272/dns-test-a9f97f0f-9299-4a19-bb68-39810ffc1be7: the server could not find the requested resource (get pods dns-test-a9f97f0f-9299-4a19-bb68-39810ffc1be7) Feb 2 23:54:50.067: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9272.svc from pod dns-9272/dns-test-a9f97f0f-9299-4a19-bb68-39810ffc1be7: the server could not find the requested resource (get pods dns-test-a9f97f0f-9299-4a19-bb68-39810ffc1be7) Feb 2 23:54:50.070: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9272.svc from pod dns-9272/dns-test-a9f97f0f-9299-4a19-bb68-39810ffc1be7: the server could not find the requested resource (get pods dns-test-a9f97f0f-9299-4a19-bb68-39810ffc1be7) Feb 2 23:54:50.095: INFO: Unable to read jessie_udp@dns-test-service from pod dns-9272/dns-test-a9f97f0f-9299-4a19-bb68-39810ffc1be7: the server could not find the requested resource (get pods dns-test-a9f97f0f-9299-4a19-bb68-39810ffc1be7) Feb 2 23:54:50.098: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-9272/dns-test-a9f97f0f-9299-4a19-bb68-39810ffc1be7: the server could not find the requested resource (get pods dns-test-a9f97f0f-9299-4a19-bb68-39810ffc1be7) Feb 2 23:54:50.101: INFO: Unable to read jessie_udp@dns-test-service.dns-9272 from pod dns-9272/dns-test-a9f97f0f-9299-4a19-bb68-39810ffc1be7: the server could not find the requested resource (get pods dns-test-a9f97f0f-9299-4a19-bb68-39810ffc1be7) Feb 2 23:54:50.104: INFO: Unable to read jessie_tcp@dns-test-service.dns-9272 from pod dns-9272/dns-test-a9f97f0f-9299-4a19-bb68-39810ffc1be7: the server could not find the requested resource (get pods dns-test-a9f97f0f-9299-4a19-bb68-39810ffc1be7) Feb 2 23:54:50.107: INFO: Unable to read jessie_udp@dns-test-service.dns-9272.svc from pod dns-9272/dns-test-a9f97f0f-9299-4a19-bb68-39810ffc1be7: the server could not find the requested resource (get pods dns-test-a9f97f0f-9299-4a19-bb68-39810ffc1be7) Feb 2 23:54:50.111: INFO: Unable to read jessie_tcp@dns-test-service.dns-9272.svc from pod dns-9272/dns-test-a9f97f0f-9299-4a19-bb68-39810ffc1be7: the server could not find the requested resource (get pods dns-test-a9f97f0f-9299-4a19-bb68-39810ffc1be7) Feb 2 23:54:50.114: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9272.svc from pod dns-9272/dns-test-a9f97f0f-9299-4a19-bb68-39810ffc1be7: the server could not find the requested resource (get pods dns-test-a9f97f0f-9299-4a19-bb68-39810ffc1be7) Feb 2 23:54:50.117: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9272.svc from pod dns-9272/dns-test-a9f97f0f-9299-4a19-bb68-39810ffc1be7: the server could not find the requested resource (get pods dns-test-a9f97f0f-9299-4a19-bb68-39810ffc1be7) Feb 2 23:54:50.135: INFO: Lookups using dns-9272/dns-test-a9f97f0f-9299-4a19-bb68-39810ffc1be7 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service 
wheezy_udp@dns-test-service.dns-9272 wheezy_tcp@dns-test-service.dns-9272 wheezy_udp@dns-test-service.dns-9272.svc wheezy_tcp@dns-test-service.dns-9272.svc wheezy_udp@_http._tcp.dns-test-service.dns-9272.svc wheezy_tcp@_http._tcp.dns-test-service.dns-9272.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-9272 jessie_tcp@dns-test-service.dns-9272 jessie_udp@dns-test-service.dns-9272.svc jessie_tcp@dns-test-service.dns-9272.svc jessie_udp@_http._tcp.dns-test-service.dns-9272.svc jessie_tcp@_http._tcp.dns-test-service.dns-9272.svc] Feb 2 23:54:55.196: INFO: DNS probes using dns-9272/dns-test-a9f97f0f-9299-4a19-bb68-39810ffc1be7 succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 2 23:54:55.904: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-9272" for this suite. • [SLOW TEST:37.272 seconds] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":309,"completed":201,"skipped":3415,"failed":0} SSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 2 23:54:55.921: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] custom resource defaulting for requests and from storage works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 Feb 2 23:54:56.010: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 2 23:54:57.211: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-1037" for this suite. 
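The DNS probes above retry the same set of partially qualified service names (dns-test-service, dns-test-service.dns-9272, dns-test-service.dns-9272.svc, and the SRV form _http._tcp.dns-test-service.dns-9272.svc) every few seconds until every lookup succeeds. As a rough illustration only, a single lookup of that kind could be reproduced by hand from a throwaway pod, assuming the dns-9272 namespace and dns-test-service still existed (the test deletes both at teardown) and using busybox:1.28 purely as an example image:

# illustrative sketch, not part of this run; namespace/service shown were deleted by the test
kubectl -n dns-9272 run dns-check --rm -i --restart=Never --image=busybox:1.28 -- nslookup dns-test-service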
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance]","total":309,"completed":202,"skipped":3426,"failed":0} SSSS ------------------------------ [sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 2 23:54:57.223: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:745 [It] should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: creating service in namespace services-3218 STEP: creating service affinity-nodeport-transition in namespace services-3218 STEP: creating replication controller affinity-nodeport-transition in namespace services-3218 I0202 23:54:57.683373 7 runners.go:190] Created replication controller with name: affinity-nodeport-transition, namespace: services-3218, replica count: 3 I0202 23:55:00.733764 7 runners.go:190] affinity-nodeport-transition Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0202 23:55:03.734036 7 runners.go:190] affinity-nodeport-transition Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Feb 2 23:55:03.747: INFO: Creating new exec pod Feb 2 23:55:08.786: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=services-3218 exec execpod-affinity4wxfv -- /bin/sh -x -c nc -zv -t -w 2 affinity-nodeport-transition 80' Feb 2 23:55:09.039: INFO: stderr: "I0202 23:55:08.925190 2939 log.go:181] (0xc000e88000) (0xc0007d0000) Create stream\nI0202 23:55:08.925267 2939 log.go:181] (0xc000e88000) (0xc0007d0000) Stream added, broadcasting: 1\nI0202 23:55:08.927348 2939 log.go:181] (0xc000e88000) Reply frame received for 1\nI0202 23:55:08.927396 2939 log.go:181] (0xc000e88000) (0xc0007d00a0) Create stream\nI0202 23:55:08.927417 2939 log.go:181] (0xc000e88000) (0xc0007d00a0) Stream added, broadcasting: 3\nI0202 23:55:08.928361 2939 log.go:181] (0xc000e88000) Reply frame received for 3\nI0202 23:55:08.928400 2939 log.go:181] (0xc000e88000) (0xc000cc6460) Create stream\nI0202 23:55:08.928411 2939 log.go:181] (0xc000e88000) (0xc000cc6460) Stream added, broadcasting: 5\nI0202 23:55:08.929421 2939 log.go:181] (0xc000e88000) Reply frame received for 5\nI0202 23:55:09.030696 2939 log.go:181] (0xc000e88000) Data frame received for 5\nI0202 23:55:09.030732 2939 log.go:181] (0xc000cc6460) (5) Data frame handling\nI0202 23:55:09.030752 2939 log.go:181] (0xc000cc6460) (5) Data frame sent\n+ nc -zv -t -w 2 affinity-nodeport-transition 80\nI0202 23:55:09.030779 2939 log.go:181] (0xc000e88000) Data frame received for 5\nI0202 23:55:09.030806 2939 log.go:181] (0xc000cc6460) (5) Data frame handling\nI0202 
23:55:09.030827 2939 log.go:181] (0xc000cc6460) (5) Data frame sent\nConnection to affinity-nodeport-transition 80 port [tcp/http] succeeded!\nI0202 23:55:09.031169 2939 log.go:181] (0xc000e88000) Data frame received for 5\nI0202 23:55:09.031197 2939 log.go:181] (0xc000cc6460) (5) Data frame handling\nI0202 23:55:09.031246 2939 log.go:181] (0xc000e88000) Data frame received for 3\nI0202 23:55:09.031266 2939 log.go:181] (0xc0007d00a0) (3) Data frame handling\nI0202 23:55:09.033679 2939 log.go:181] (0xc000e88000) Data frame received for 1\nI0202 23:55:09.033737 2939 log.go:181] (0xc0007d0000) (1) Data frame handling\nI0202 23:55:09.033787 2939 log.go:181] (0xc0007d0000) (1) Data frame sent\nI0202 23:55:09.033819 2939 log.go:181] (0xc000e88000) (0xc0007d0000) Stream removed, broadcasting: 1\nI0202 23:55:09.033842 2939 log.go:181] (0xc000e88000) Go away received\nI0202 23:55:09.034241 2939 log.go:181] (0xc000e88000) (0xc0007d0000) Stream removed, broadcasting: 1\nI0202 23:55:09.034260 2939 log.go:181] (0xc000e88000) (0xc0007d00a0) Stream removed, broadcasting: 3\nI0202 23:55:09.034270 2939 log.go:181] (0xc000e88000) (0xc000cc6460) Stream removed, broadcasting: 5\n" Feb 2 23:55:09.039: INFO: stdout: "" Feb 2 23:55:09.040: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=services-3218 exec execpod-affinity4wxfv -- /bin/sh -x -c nc -zv -t -w 2 10.96.149.69 80' Feb 2 23:55:09.246: INFO: stderr: "I0202 23:55:09.171512 2957 log.go:181] (0xc00018d080) (0xc000b200a0) Create stream\nI0202 23:55:09.171579 2957 log.go:181] (0xc00018d080) (0xc000b200a0) Stream added, broadcasting: 1\nI0202 23:55:09.173718 2957 log.go:181] (0xc00018d080) Reply frame received for 1\nI0202 23:55:09.173763 2957 log.go:181] (0xc00018d080) (0xc000b20140) Create stream\nI0202 23:55:09.173781 2957 log.go:181] (0xc00018d080) (0xc000b20140) Stream added, broadcasting: 3\nI0202 23:55:09.174881 2957 log.go:181] (0xc00018d080) Reply frame received for 3\nI0202 23:55:09.174932 2957 log.go:181] (0xc00018d080) (0xc00015e3c0) Create stream\nI0202 23:55:09.174956 2957 log.go:181] (0xc00018d080) (0xc00015e3c0) Stream added, broadcasting: 5\nI0202 23:55:09.175826 2957 log.go:181] (0xc00018d080) Reply frame received for 5\nI0202 23:55:09.238414 2957 log.go:181] (0xc00018d080) Data frame received for 3\nI0202 23:55:09.238459 2957 log.go:181] (0xc00018d080) Data frame received for 5\nI0202 23:55:09.238486 2957 log.go:181] (0xc00015e3c0) (5) Data frame handling\nI0202 23:55:09.238501 2957 log.go:181] (0xc00015e3c0) (5) Data frame sent\nI0202 23:55:09.238540 2957 log.go:181] (0xc00018d080) Data frame received for 5\nI0202 23:55:09.238553 2957 log.go:181] (0xc00015e3c0) (5) Data frame handling\n+ nc -zv -t -w 2 10.96.149.69 80\nConnection to 10.96.149.69 80 port [tcp/http] succeeded!\nI0202 23:55:09.238585 2957 log.go:181] (0xc000b20140) (3) Data frame handling\nI0202 23:55:09.240069 2957 log.go:181] (0xc00018d080) Data frame received for 1\nI0202 23:55:09.240191 2957 log.go:181] (0xc000b200a0) (1) Data frame handling\nI0202 23:55:09.240223 2957 log.go:181] (0xc000b200a0) (1) Data frame sent\nI0202 23:55:09.240242 2957 log.go:181] (0xc00018d080) (0xc000b200a0) Stream removed, broadcasting: 1\nI0202 23:55:09.240262 2957 log.go:181] (0xc00018d080) Go away received\nI0202 23:55:09.240677 2957 log.go:181] (0xc00018d080) (0xc000b200a0) Stream removed, broadcasting: 1\nI0202 23:55:09.240696 2957 log.go:181] (0xc00018d080) (0xc000b20140) Stream removed, broadcasting: 3\nI0202 
23:55:09.240704 2957 log.go:181] (0xc00018d080) (0xc00015e3c0) Stream removed, broadcasting: 5\n" Feb 2 23:55:09.246: INFO: stdout: "" Feb 2 23:55:09.246: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=services-3218 exec execpod-affinity4wxfv -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.13 32238' Feb 2 23:55:09.484: INFO: stderr: "I0202 23:55:09.406231 2975 log.go:181] (0xc000a31130) (0xc000c2ab40) Create stream\nI0202 23:55:09.406339 2975 log.go:181] (0xc000a31130) (0xc000c2ab40) Stream added, broadcasting: 1\nI0202 23:55:09.409753 2975 log.go:181] (0xc000a31130) Reply frame received for 1\nI0202 23:55:09.409787 2975 log.go:181] (0xc000a31130) (0xc000b7a000) Create stream\nI0202 23:55:09.409799 2975 log.go:181] (0xc000a31130) (0xc000b7a000) Stream added, broadcasting: 3\nI0202 23:55:09.410541 2975 log.go:181] (0xc000a31130) Reply frame received for 3\nI0202 23:55:09.410566 2975 log.go:181] (0xc000a31130) (0xc000c2a000) Create stream\nI0202 23:55:09.410573 2975 log.go:181] (0xc000a31130) (0xc000c2a000) Stream added, broadcasting: 5\nI0202 23:55:09.411237 2975 log.go:181] (0xc000a31130) Reply frame received for 5\nI0202 23:55:09.476180 2975 log.go:181] (0xc000a31130) Data frame received for 3\nI0202 23:55:09.476243 2975 log.go:181] (0xc000a31130) Data frame received for 5\nI0202 23:55:09.476262 2975 log.go:181] (0xc000c2a000) (5) Data frame handling\nI0202 23:55:09.476269 2975 log.go:181] (0xc000c2a000) (5) Data frame sent\nI0202 23:55:09.476274 2975 log.go:181] (0xc000a31130) Data frame received for 5\nI0202 23:55:09.476278 2975 log.go:181] (0xc000c2a000) (5) Data frame handling\nI0202 23:55:09.476287 2975 log.go:181] (0xc000b7a000) (3) Data frame handling\n+ nc -zv -t -w 2 172.18.0.13 32238\nConnection to 172.18.0.13 32238 port [tcp/32238] succeeded!\nI0202 23:55:09.478011 2975 log.go:181] (0xc000a31130) Data frame received for 1\nI0202 23:55:09.478031 2975 log.go:181] (0xc000c2ab40) (1) Data frame handling\nI0202 23:55:09.478039 2975 log.go:181] (0xc000c2ab40) (1) Data frame sent\nI0202 23:55:09.478048 2975 log.go:181] (0xc000a31130) (0xc000c2ab40) Stream removed, broadcasting: 1\nI0202 23:55:09.478056 2975 log.go:181] (0xc000a31130) Go away received\nI0202 23:55:09.478361 2975 log.go:181] (0xc000a31130) (0xc000c2ab40) Stream removed, broadcasting: 1\nI0202 23:55:09.478374 2975 log.go:181] (0xc000a31130) (0xc000b7a000) Stream removed, broadcasting: 3\nI0202 23:55:09.478380 2975 log.go:181] (0xc000a31130) (0xc000c2a000) Stream removed, broadcasting: 5\n" Feb 2 23:55:09.484: INFO: stdout: "" Feb 2 23:55:09.484: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=services-3218 exec execpod-affinity4wxfv -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.12 32238' Feb 2 23:55:09.677: INFO: stderr: "I0202 23:55:09.609394 2993 log.go:181] (0xc000944000) (0xc000c06000) Create stream\nI0202 23:55:09.609443 2993 log.go:181] (0xc000944000) (0xc000c06000) Stream added, broadcasting: 1\nI0202 23:55:09.611173 2993 log.go:181] (0xc000944000) Reply frame received for 1\nI0202 23:55:09.611206 2993 log.go:181] (0xc000944000) (0xc000c060a0) Create stream\nI0202 23:55:09.611223 2993 log.go:181] (0xc000944000) (0xc000c060a0) Stream added, broadcasting: 3\nI0202 23:55:09.611985 2993 log.go:181] (0xc000944000) Reply frame received for 3\nI0202 23:55:09.612006 2993 log.go:181] (0xc000944000) (0xc000c06140) Create stream\nI0202 23:55:09.612021 2993 log.go:181] (0xc000944000) (0xc000c06140) 
Stream added, broadcasting: 5\nI0202 23:55:09.612798 2993 log.go:181] (0xc000944000) Reply frame received for 5\nI0202 23:55:09.669019 2993 log.go:181] (0xc000944000) Data frame received for 5\nI0202 23:55:09.669046 2993 log.go:181] (0xc000c06140) (5) Data frame handling\nI0202 23:55:09.669065 2993 log.go:181] (0xc000c06140) (5) Data frame sent\nI0202 23:55:09.669073 2993 log.go:181] (0xc000944000) Data frame received for 5\n+ nc -zv -t -w 2 172.18.0.12 32238\nConnection to 172.18.0.12 32238 port [tcp/32238] succeeded!\nI0202 23:55:09.669079 2993 log.go:181] (0xc000c06140) (5) Data frame handling\nI0202 23:55:09.669377 2993 log.go:181] (0xc000944000) Data frame received for 3\nI0202 23:55:09.669404 2993 log.go:181] (0xc000c060a0) (3) Data frame handling\nI0202 23:55:09.670536 2993 log.go:181] (0xc000944000) Data frame received for 1\nI0202 23:55:09.670560 2993 log.go:181] (0xc000c06000) (1) Data frame handling\nI0202 23:55:09.670574 2993 log.go:181] (0xc000c06000) (1) Data frame sent\nI0202 23:55:09.670589 2993 log.go:181] (0xc000944000) (0xc000c06000) Stream removed, broadcasting: 1\nI0202 23:55:09.670604 2993 log.go:181] (0xc000944000) Go away received\nI0202 23:55:09.671023 2993 log.go:181] (0xc000944000) (0xc000c06000) Stream removed, broadcasting: 1\nI0202 23:55:09.671056 2993 log.go:181] (0xc000944000) (0xc000c060a0) Stream removed, broadcasting: 3\nI0202 23:55:09.671071 2993 log.go:181] (0xc000944000) (0xc000c06140) Stream removed, broadcasting: 5\n" Feb 2 23:55:09.677: INFO: stdout: "" Feb 2 23:55:09.697: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=services-3218 exec execpod-affinity4wxfv -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.18.0.13:32238/ ; done' Feb 2 23:55:10.011: INFO: stderr: "I0202 23:55:09.842853 3011 log.go:181] (0xc001128fd0) (0xc0005043c0) Create stream\nI0202 23:55:09.842963 3011 log.go:181] (0xc001128fd0) (0xc0005043c0) Stream added, broadcasting: 1\nI0202 23:55:09.848943 3011 log.go:181] (0xc001128fd0) Reply frame received for 1\nI0202 23:55:09.848999 3011 log.go:181] (0xc001128fd0) (0xc000504be0) Create stream\nI0202 23:55:09.849016 3011 log.go:181] (0xc001128fd0) (0xc000504be0) Stream added, broadcasting: 3\nI0202 23:55:09.849888 3011 log.go:181] (0xc001128fd0) Reply frame received for 3\nI0202 23:55:09.849918 3011 log.go:181] (0xc001128fd0) (0xc000b98000) Create stream\nI0202 23:55:09.849928 3011 log.go:181] (0xc001128fd0) (0xc000b98000) Stream added, broadcasting: 5\nI0202 23:55:09.850798 3011 log.go:181] (0xc001128fd0) Reply frame received for 5\nI0202 23:55:09.904008 3011 log.go:181] (0xc001128fd0) Data frame received for 5\nI0202 23:55:09.904063 3011 log.go:181] (0xc000b98000) (5) Data frame handling\nI0202 23:55:09.904081 3011 log.go:181] (0xc000b98000) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.13:32238/\nI0202 23:55:09.904104 3011 log.go:181] (0xc001128fd0) Data frame received for 3\nI0202 23:55:09.904118 3011 log.go:181] (0xc000504be0) (3) Data frame handling\nI0202 23:55:09.904149 3011 log.go:181] (0xc000504be0) (3) Data frame sent\nI0202 23:55:09.910896 3011 log.go:181] (0xc001128fd0) Data frame received for 3\nI0202 23:55:09.910921 3011 log.go:181] (0xc000504be0) (3) Data frame handling\nI0202 23:55:09.910939 3011 log.go:181] (0xc000504be0) (3) Data frame sent\nI0202 23:55:09.911600 3011 log.go:181] (0xc001128fd0) Data frame received for 5\nI0202 23:55:09.911636 3011 log.go:181] 
(0xc000b98000) (5) Data frame handling\nI0202 23:55:09.911651 3011 log.go:181] (0xc000b98000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.13:32238/\nI0202 23:55:09.911670 3011 log.go:181] (0xc001128fd0) Data frame received for 3\nI0202 23:55:09.911679 3011 log.go:181] (0xc000504be0) (3) Data frame handling\nI0202 23:55:09.911690 3011 log.go:181] (0xc000504be0) (3) Data frame sent\nI0202 23:55:09.916289 3011 log.go:181] (0xc001128fd0) Data frame received for 3\nI0202 23:55:09.916315 3011 log.go:181] (0xc000504be0) (3) Data frame handling\nI0202 23:55:09.916338 3011 log.go:181] (0xc000504be0) (3) Data frame sent\nI0202 23:55:09.917074 3011 log.go:181] (0xc001128fd0) Data frame received for 5\nI0202 23:55:09.917118 3011 log.go:181] (0xc001128fd0) Data frame received for 3\nI0202 23:55:09.917163 3011 log.go:181] (0xc000504be0) (3) Data frame handling\nI0202 23:55:09.917193 3011 log.go:181] (0xc000504be0) (3) Data frame sent\nI0202 23:55:09.917226 3011 log.go:181] (0xc000b98000) (5) Data frame handling\nI0202 23:55:09.917255 3011 log.go:181] (0xc000b98000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.13:32238/\nI0202 23:55:09.924332 3011 log.go:181] (0xc001128fd0) Data frame received for 3\nI0202 23:55:09.924370 3011 log.go:181] (0xc000504be0) (3) Data frame handling\nI0202 23:55:09.924408 3011 log.go:181] (0xc000504be0) (3) Data frame sent\nI0202 23:55:09.925228 3011 log.go:181] (0xc001128fd0) Data frame received for 3\nI0202 23:55:09.925246 3011 log.go:181] (0xc000504be0) (3) Data frame handling\nI0202 23:55:09.925256 3011 log.go:181] (0xc000504be0) (3) Data frame sent\nI0202 23:55:09.925268 3011 log.go:181] (0xc001128fd0) Data frame received for 5\nI0202 23:55:09.925291 3011 log.go:181] (0xc000b98000) (5) Data frame handling\nI0202 23:55:09.925307 3011 log.go:181] (0xc000b98000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.13:32238/\nI0202 23:55:09.930138 3011 log.go:181] (0xc001128fd0) Data frame received for 3\nI0202 23:55:09.930179 3011 log.go:181] (0xc000504be0) (3) Data frame handling\nI0202 23:55:09.930219 3011 log.go:181] (0xc000504be0) (3) Data frame sent\nI0202 23:55:09.931079 3011 log.go:181] (0xc001128fd0) Data frame received for 5\nI0202 23:55:09.931106 3011 log.go:181] (0xc001128fd0) Data frame received for 3\nI0202 23:55:09.931132 3011 log.go:181] (0xc000504be0) (3) Data frame handling\nI0202 23:55:09.931143 3011 log.go:181] (0xc000504be0) (3) Data frame sent\nI0202 23:55:09.931160 3011 log.go:181] (0xc000b98000) (5) Data frame handling\nI0202 23:55:09.931170 3011 log.go:181] (0xc000b98000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.13:32238/\nI0202 23:55:09.936478 3011 log.go:181] (0xc001128fd0) Data frame received for 3\nI0202 23:55:09.936519 3011 log.go:181] (0xc000504be0) (3) Data frame handling\nI0202 23:55:09.936556 3011 log.go:181] (0xc000504be0) (3) Data frame sent\nI0202 23:55:09.937415 3011 log.go:181] (0xc001128fd0) Data frame received for 3\nI0202 23:55:09.937436 3011 log.go:181] (0xc000504be0) (3) Data frame handling\nI0202 23:55:09.937449 3011 log.go:181] (0xc000504be0) (3) Data frame sent\nI0202 23:55:09.937477 3011 log.go:181] (0xc001128fd0) Data frame received for 5\nI0202 23:55:09.937509 3011 log.go:181] (0xc000b98000) (5) Data frame handling\nI0202 23:55:09.937542 3011 log.go:181] (0xc000b98000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.13:32238/\nI0202 23:55:09.944442 3011 log.go:181] 
(0xc001128fd0) Data frame received for 3\nI0202 23:55:09.944478 3011 log.go:181] (0xc000504be0) (3) Data frame handling\nI0202 23:55:09.944537 3011 log.go:181] (0xc000504be0) (3) Data frame sent\nI0202 23:55:09.945367 3011 log.go:181] (0xc001128fd0) Data frame received for 3\nI0202 23:55:09.945386 3011 log.go:181] (0xc000504be0) (3) Data frame handling\nI0202 23:55:09.945398 3011 log.go:181] (0xc000504be0) (3) Data frame sent\nI0202 23:55:09.945433 3011 log.go:181] (0xc001128fd0) Data frame received for 5\nI0202 23:55:09.945455 3011 log.go:181] (0xc000b98000) (5) Data frame handling\nI0202 23:55:09.945464 3011 log.go:181] (0xc000b98000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.13:32238/\nI0202 23:55:09.951279 3011 log.go:181] (0xc001128fd0) Data frame received for 3\nI0202 23:55:09.951302 3011 log.go:181] (0xc000504be0) (3) Data frame handling\nI0202 23:55:09.951317 3011 log.go:181] (0xc000504be0) (3) Data frame sent\nI0202 23:55:09.951891 3011 log.go:181] (0xc001128fd0) Data frame received for 3\nI0202 23:55:09.951933 3011 log.go:181] (0xc000504be0) (3) Data frame handling\nI0202 23:55:09.951945 3011 log.go:181] (0xc000504be0) (3) Data frame sent\nI0202 23:55:09.951962 3011 log.go:181] (0xc001128fd0) Data frame received for 5\nI0202 23:55:09.951972 3011 log.go:181] (0xc000b98000) (5) Data frame handling\nI0202 23:55:09.951983 3011 log.go:181] (0xc000b98000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.13:32238/\nI0202 23:55:09.955264 3011 log.go:181] (0xc001128fd0) Data frame received for 3\nI0202 23:55:09.955289 3011 log.go:181] (0xc000504be0) (3) Data frame handling\nI0202 23:55:09.955307 3011 log.go:181] (0xc000504be0) (3) Data frame sent\nI0202 23:55:09.956254 3011 log.go:181] (0xc001128fd0) Data frame received for 3\nI0202 23:55:09.956283 3011 log.go:181] (0xc001128fd0) Data frame received for 5\nI0202 23:55:09.956330 3011 log.go:181] (0xc000b98000) (5) Data frame handling\nI0202 23:55:09.956352 3011 log.go:181] (0xc000b98000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.13:32238/\nI0202 23:55:09.956372 3011 log.go:181] (0xc000504be0) (3) Data frame handling\nI0202 23:55:09.956382 3011 log.go:181] (0xc000504be0) (3) Data frame sent\nI0202 23:55:09.959902 3011 log.go:181] (0xc001128fd0) Data frame received for 3\nI0202 23:55:09.959939 3011 log.go:181] (0xc000504be0) (3) Data frame handling\nI0202 23:55:09.959973 3011 log.go:181] (0xc000504be0) (3) Data frame sent\nI0202 23:55:09.960108 3011 log.go:181] (0xc001128fd0) Data frame received for 5\nI0202 23:55:09.960150 3011 log.go:181] (0xc000b98000) (5) Data frame handling\nI0202 23:55:09.960181 3011 log.go:181] (0xc000b98000) (5) Data frame sent\nI0202 23:55:09.960224 3011 log.go:181] (0xc001128fd0) Data frame received for 5\nI0202 23:55:09.960246 3011 log.go:181] (0xc000b98000) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.13:32238/\nI0202 23:55:09.960289 3011 log.go:181] (0xc001128fd0) Data frame received for 3\nI0202 23:55:09.960334 3011 log.go:181] (0xc000504be0) (3) Data frame handling\nI0202 23:55:09.960370 3011 log.go:181] (0xc000b98000) (5) Data frame sent\nI0202 23:55:09.960404 3011 log.go:181] (0xc000504be0) (3) Data frame sent\nI0202 23:55:09.964272 3011 log.go:181] (0xc001128fd0) Data frame received for 3\nI0202 23:55:09.964293 3011 log.go:181] (0xc000504be0) (3) Data frame handling\nI0202 23:55:09.964304 3011 log.go:181] (0xc000504be0) (3) Data frame sent\nI0202 23:55:09.964733 3011 
log.go:181] (0xc001128fd0) Data frame received for 3\nI0202 23:55:09.964762 3011 log.go:181] (0xc000504be0) (3) Data frame handling\nI0202 23:55:09.964793 3011 log.go:181] (0xc000504be0) (3) Data frame sent\nI0202 23:55:09.965066 3011 log.go:181] (0xc001128fd0) Data frame received for 5\nI0202 23:55:09.965089 3011 log.go:181] (0xc000b98000) (5) Data frame handling\nI0202 23:55:09.965108 3011 log.go:181] (0xc000b98000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.13:32238/\nI0202 23:55:09.967959 3011 log.go:181] (0xc001128fd0) Data frame received for 3\nI0202 23:55:09.967988 3011 log.go:181] (0xc000504be0) (3) Data frame handling\nI0202 23:55:09.968022 3011 log.go:181] (0xc000504be0) (3) Data frame sent\nI0202 23:55:09.969173 3011 log.go:181] (0xc001128fd0) Data frame received for 5\nI0202 23:55:09.969215 3011 log.go:181] (0xc000b98000) (5) Data frame handling\nI0202 23:55:09.969232 3011 log.go:181] (0xc000b98000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.13:32238/\nI0202 23:55:09.969251 3011 log.go:181] (0xc001128fd0) Data frame received for 3\nI0202 23:55:09.969260 3011 log.go:181] (0xc000504be0) (3) Data frame handling\nI0202 23:55:09.969274 3011 log.go:181] (0xc000504be0) (3) Data frame sent\nI0202 23:55:09.973846 3011 log.go:181] (0xc001128fd0) Data frame received for 3\nI0202 23:55:09.973869 3011 log.go:181] (0xc000504be0) (3) Data frame handling\nI0202 23:55:09.973887 3011 log.go:181] (0xc000504be0) (3) Data frame sent\nI0202 23:55:09.974284 3011 log.go:181] (0xc001128fd0) Data frame received for 5\nI0202 23:55:09.974315 3011 log.go:181] (0xc000b98000) (5) Data frame handling\nI0202 23:55:09.974335 3011 log.go:181] (0xc000b98000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeoutI0202 23:55:09.974355 3011 log.go:181] (0xc001128fd0) Data frame received for 3\nI0202 23:55:09.974366 3011 log.go:181] (0xc000504be0) (3) Data frame handling\nI0202 23:55:09.974377 3011 log.go:181] (0xc000504be0) (3) Data frame sent\nI0202 23:55:09.974650 3011 log.go:181] (0xc001128fd0) Data frame received for 5\nI0202 23:55:09.974681 3011 log.go:181] (0xc000b98000) (5) Data frame handling\nI0202 23:55:09.974714 3011 log.go:181] (0xc000b98000) (5) Data frame sent\n 2 http://172.18.0.13:32238/\nI0202 23:55:09.981655 3011 log.go:181] (0xc001128fd0) Data frame received for 3\nI0202 23:55:09.981682 3011 log.go:181] (0xc000504be0) (3) Data frame handling\nI0202 23:55:09.981696 3011 log.go:181] (0xc000504be0) (3) Data frame sent\nI0202 23:55:09.982579 3011 log.go:181] (0xc001128fd0) Data frame received for 5\nI0202 23:55:09.982611 3011 log.go:181] (0xc001128fd0) Data frame received for 3\nI0202 23:55:09.982658 3011 log.go:181] (0xc000504be0) (3) Data frame handling\nI0202 23:55:09.982684 3011 log.go:181] (0xc000504be0) (3) Data frame sent\nI0202 23:55:09.982703 3011 log.go:181] (0xc000b98000) (5) Data frame handling\nI0202 23:55:09.982718 3011 log.go:181] (0xc000b98000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.13:32238/\nI0202 23:55:09.986974 3011 log.go:181] (0xc001128fd0) Data frame received for 3\nI0202 23:55:09.987005 3011 log.go:181] (0xc000504be0) (3) Data frame handling\nI0202 23:55:09.987031 3011 log.go:181] (0xc000504be0) (3) Data frame sent\nI0202 23:55:09.987437 3011 log.go:181] (0xc001128fd0) Data frame received for 3\nI0202 23:55:09.987460 3011 log.go:181] (0xc000504be0) (3) Data frame handling\nI0202 23:55:09.987485 3011 log.go:181] (0xc000504be0) (3) Data frame sent\nI0202 
23:55:09.987503 3011 log.go:181] (0xc001128fd0) Data frame received for 5\nI0202 23:55:09.987513 3011 log.go:181] (0xc000b98000) (5) Data frame handling\nI0202 23:55:09.987533 3011 log.go:181] (0xc000b98000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.13:32238/\nI0202 23:55:09.993676 3011 log.go:181] (0xc001128fd0) Data frame received for 3\nI0202 23:55:09.993714 3011 log.go:181] (0xc000504be0) (3) Data frame handling\nI0202 23:55:09.993738 3011 log.go:181] (0xc000504be0) (3) Data frame sent\nI0202 23:55:09.994051 3011 log.go:181] (0xc001128fd0) Data frame received for 3\nI0202 23:55:09.994086 3011 log.go:181] (0xc000504be0) (3) Data frame handling\nI0202 23:55:09.994101 3011 log.go:181] (0xc000504be0) (3) Data frame sent\nI0202 23:55:09.994122 3011 log.go:181] (0xc001128fd0) Data frame received for 5\nI0202 23:55:09.994133 3011 log.go:181] (0xc000b98000) (5) Data frame handling\nI0202 23:55:09.994151 3011 log.go:181] (0xc000b98000) (5) Data frame sent\nI0202 23:55:09.994169 3011 log.go:181] (0xc001128fd0) Data frame received for 5\nI0202 23:55:09.994181 3011 log.go:181] (0xc000b98000) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.13:32238/\nI0202 23:55:09.994210 3011 log.go:181] (0xc000b98000) (5) Data frame sent\nI0202 23:55:10.000542 3011 log.go:181] (0xc001128fd0) Data frame received for 3\nI0202 23:55:10.000590 3011 log.go:181] (0xc000504be0) (3) Data frame handling\nI0202 23:55:10.000613 3011 log.go:181] (0xc000504be0) (3) Data frame sent\nI0202 23:55:10.001940 3011 log.go:181] (0xc001128fd0) Data frame received for 5\nI0202 23:55:10.001968 3011 log.go:181] (0xc000b98000) (5) Data frame handling\nI0202 23:55:10.002701 3011 log.go:181] (0xc001128fd0) Data frame received for 3\nI0202 23:55:10.002723 3011 log.go:181] (0xc000504be0) (3) Data frame handling\nI0202 23:55:10.004525 3011 log.go:181] (0xc001128fd0) Data frame received for 1\nI0202 23:55:10.004562 3011 log.go:181] (0xc0005043c0) (1) Data frame handling\nI0202 23:55:10.004586 3011 log.go:181] (0xc0005043c0) (1) Data frame sent\nI0202 23:55:10.004615 3011 log.go:181] (0xc001128fd0) (0xc0005043c0) Stream removed, broadcasting: 1\nI0202 23:55:10.004649 3011 log.go:181] (0xc001128fd0) Go away received\nI0202 23:55:10.005288 3011 log.go:181] (0xc001128fd0) (0xc0005043c0) Stream removed, broadcasting: 1\nI0202 23:55:10.005319 3011 log.go:181] (0xc001128fd0) (0xc000504be0) Stream removed, broadcasting: 3\nI0202 23:55:10.005337 3011 log.go:181] (0xc001128fd0) (0xc000b98000) Stream removed, broadcasting: 5\n" Feb 2 23:55:10.012: INFO: stdout: "\naffinity-nodeport-transition-mz7l6\naffinity-nodeport-transition-j5c5r\naffinity-nodeport-transition-mz7l6\naffinity-nodeport-transition-j5c5r\naffinity-nodeport-transition-j5c5r\naffinity-nodeport-transition-mz7l6\naffinity-nodeport-transition-x6p7d\naffinity-nodeport-transition-j5c5r\naffinity-nodeport-transition-mz7l6\naffinity-nodeport-transition-x6p7d\naffinity-nodeport-transition-j5c5r\naffinity-nodeport-transition-mz7l6\naffinity-nodeport-transition-j5c5r\naffinity-nodeport-transition-mz7l6\naffinity-nodeport-transition-x6p7d\naffinity-nodeport-transition-mz7l6" Feb 2 23:55:10.012: INFO: Received response from host: affinity-nodeport-transition-mz7l6 Feb 2 23:55:10.012: INFO: Received response from host: affinity-nodeport-transition-j5c5r Feb 2 23:55:10.012: INFO: Received response from host: affinity-nodeport-transition-mz7l6 Feb 2 23:55:10.012: INFO: Received response from host: affinity-nodeport-transition-j5c5r Feb 
2 23:55:10.012: INFO: Received response from host: affinity-nodeport-transition-j5c5r Feb 2 23:55:10.012: INFO: Received response from host: affinity-nodeport-transition-mz7l6 Feb 2 23:55:10.012: INFO: Received response from host: affinity-nodeport-transition-x6p7d Feb 2 23:55:10.012: INFO: Received response from host: affinity-nodeport-transition-j5c5r Feb 2 23:55:10.012: INFO: Received response from host: affinity-nodeport-transition-mz7l6 Feb 2 23:55:10.012: INFO: Received response from host: affinity-nodeport-transition-x6p7d Feb 2 23:55:10.012: INFO: Received response from host: affinity-nodeport-transition-j5c5r Feb 2 23:55:10.012: INFO: Received response from host: affinity-nodeport-transition-mz7l6 Feb 2 23:55:10.012: INFO: Received response from host: affinity-nodeport-transition-j5c5r Feb 2 23:55:10.012: INFO: Received response from host: affinity-nodeport-transition-mz7l6 Feb 2 23:55:10.012: INFO: Received response from host: affinity-nodeport-transition-x6p7d Feb 2 23:55:10.012: INFO: Received response from host: affinity-nodeport-transition-mz7l6 Feb 2 23:55:10.025: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=services-3218 exec execpod-affinity4wxfv -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.18.0.13:32238/ ; done' Feb 2 23:55:10.329: INFO: stderr: "I0202 23:55:10.160568 3029 log.go:181] (0xc000276000) (0xc000a8a000) Create stream\nI0202 23:55:10.160625 3029 log.go:181] (0xc000276000) (0xc000a8a000) Stream added, broadcasting: 1\nI0202 23:55:10.165750 3029 log.go:181] (0xc000276000) Reply frame received for 1\nI0202 23:55:10.165806 3029 log.go:181] (0xc000276000) (0xc000a8a0a0) Create stream\nI0202 23:55:10.165822 3029 log.go:181] (0xc000276000) (0xc000a8a0a0) Stream added, broadcasting: 3\nI0202 23:55:10.166738 3029 log.go:181] (0xc000276000) Reply frame received for 3\nI0202 23:55:10.166799 3029 log.go:181] (0xc000276000) (0xc0006ad040) Create stream\nI0202 23:55:10.166814 3029 log.go:181] (0xc000276000) (0xc0006ad040) Stream added, broadcasting: 5\nI0202 23:55:10.167775 3029 log.go:181] (0xc000276000) Reply frame received for 5\nI0202 23:55:10.227851 3029 log.go:181] (0xc000276000) Data frame received for 3\nI0202 23:55:10.227873 3029 log.go:181] (0xc000a8a0a0) (3) Data frame handling\nI0202 23:55:10.227881 3029 log.go:181] (0xc000a8a0a0) (3) Data frame sent\nI0202 23:55:10.227898 3029 log.go:181] (0xc000276000) Data frame received for 5\nI0202 23:55:10.227903 3029 log.go:181] (0xc0006ad040) (5) Data frame handling\nI0202 23:55:10.227909 3029 log.go:181] (0xc0006ad040) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.13:32238/\nI0202 23:55:10.233204 3029 log.go:181] (0xc000276000) Data frame received for 3\nI0202 23:55:10.233228 3029 log.go:181] (0xc000a8a0a0) (3) Data frame handling\nI0202 23:55:10.233240 3029 log.go:181] (0xc000a8a0a0) (3) Data frame sent\nI0202 23:55:10.234019 3029 log.go:181] (0xc000276000) Data frame received for 5\nI0202 23:55:10.234038 3029 log.go:181] (0xc0006ad040) (5) Data frame handling\nI0202 23:55:10.234048 3029 log.go:181] (0xc0006ad040) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.13:32238/\nI0202 23:55:10.234084 3029 log.go:181] (0xc000276000) Data frame received for 3\nI0202 23:55:10.234126 3029 log.go:181] (0xc000a8a0a0) (3) Data frame handling\nI0202 23:55:10.234155 3029 log.go:181] (0xc000a8a0a0) (3) Data frame sent\nI0202 23:55:10.239474 3029 
log.go:181] (0xc000276000) Data frame received for 3\nI0202 23:55:10.239484 3029 log.go:181] (0xc000a8a0a0) (3) Data frame handling\nI0202 23:55:10.239497 3029 log.go:181] (0xc000a8a0a0) (3) Data frame sent\nI0202 23:55:10.240323 3029 log.go:181] (0xc000276000) Data frame received for 3\nI0202 23:55:10.240337 3029 log.go:181] (0xc000a8a0a0) (3) Data frame handling\nI0202 23:55:10.240348 3029 log.go:181] (0xc000a8a0a0) (3) Data frame sent\nI0202 23:55:10.240372 3029 log.go:181] (0xc000276000) Data frame received for 5\nI0202 23:55:10.240398 3029 log.go:181] (0xc0006ad040) (5) Data frame handling\nI0202 23:55:10.240415 3029 log.go:181] (0xc0006ad040) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.13:32238/\nI0202 23:55:10.246902 3029 log.go:181] (0xc000276000) Data frame received for 3\nI0202 23:55:10.246941 3029 log.go:181] (0xc000a8a0a0) (3) Data frame handling\nI0202 23:55:10.246972 3029 log.go:181] (0xc000a8a0a0) (3) Data frame sent\nI0202 23:55:10.247221 3029 log.go:181] (0xc000276000) Data frame received for 5\nI0202 23:55:10.247237 3029 log.go:181] (0xc0006ad040) (5) Data frame handling\nI0202 23:55:10.247249 3029 log.go:181] (0xc0006ad040) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.13:32238/\nI0202 23:55:10.247371 3029 log.go:181] (0xc000276000) Data frame received for 3\nI0202 23:55:10.247393 3029 log.go:181] (0xc000a8a0a0) (3) Data frame handling\nI0202 23:55:10.247402 3029 log.go:181] (0xc000a8a0a0) (3) Data frame sent\nI0202 23:55:10.251014 3029 log.go:181] (0xc000276000) Data frame received for 3\nI0202 23:55:10.251037 3029 log.go:181] (0xc000a8a0a0) (3) Data frame handling\nI0202 23:55:10.251055 3029 log.go:181] (0xc000a8a0a0) (3) Data frame sent\nI0202 23:55:10.251681 3029 log.go:181] (0xc000276000) Data frame received for 3\nI0202 23:55:10.251716 3029 log.go:181] (0xc000a8a0a0) (3) Data frame handling\nI0202 23:55:10.251727 3029 log.go:181] (0xc000a8a0a0) (3) Data frame sent\nI0202 23:55:10.251743 3029 log.go:181] (0xc000276000) Data frame received for 5\nI0202 23:55:10.251751 3029 log.go:181] (0xc0006ad040) (5) Data frame handling\nI0202 23:55:10.251760 3029 log.go:181] (0xc0006ad040) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.13:32238/\nI0202 23:55:10.256810 3029 log.go:181] (0xc000276000) Data frame received for 3\nI0202 23:55:10.256825 3029 log.go:181] (0xc000a8a0a0) (3) Data frame handling\nI0202 23:55:10.256967 3029 log.go:181] (0xc000a8a0a0) (3) Data frame sent\nI0202 23:55:10.257476 3029 log.go:181] (0xc000276000) Data frame received for 3\nI0202 23:55:10.257495 3029 log.go:181] (0xc000a8a0a0) (3) Data frame handling\nI0202 23:55:10.257501 3029 log.go:181] (0xc000a8a0a0) (3) Data frame sent\nI0202 23:55:10.257533 3029 log.go:181] (0xc000276000) Data frame received for 5\nI0202 23:55:10.257577 3029 log.go:181] (0xc0006ad040) (5) Data frame handling\nI0202 23:55:10.257619 3029 log.go:181] (0xc0006ad040) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.13:32238/\nI0202 23:55:10.261899 3029 log.go:181] (0xc000276000) Data frame received for 3\nI0202 23:55:10.261915 3029 log.go:181] (0xc000a8a0a0) (3) Data frame handling\nI0202 23:55:10.261922 3029 log.go:181] (0xc000a8a0a0) (3) Data frame sent\nI0202 23:55:10.261991 3029 log.go:181] (0xc000276000) Data frame received for 5\nI0202 23:55:10.262011 3029 log.go:181] (0xc0006ad040) (5) Data frame handling\nI0202 23:55:10.262024 3029 log.go:181] (0xc0006ad040) (5) Data frame sent\nI0202 
23:55:10.262039 3029 log.go:181] (0xc000276000) Data frame received for 3\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.13:32238/\nI0202 23:55:10.262066 3029 log.go:181] (0xc000a8a0a0) (3) Data frame handling\nI0202 23:55:10.262091 3029 log.go:181] (0xc000a8a0a0) (3) Data frame sent\nI0202 23:55:10.265895 3029 log.go:181] (0xc000276000) Data frame received for 3\nI0202 23:55:10.265913 3029 log.go:181] (0xc000a8a0a0) (3) Data frame handling\nI0202 23:55:10.265921 3029 log.go:181] (0xc000a8a0a0) (3) Data frame sent\nI0202 23:55:10.266313 3029 log.go:181] (0xc000276000) Data frame received for 3\nI0202 23:55:10.266328 3029 log.go:181] (0xc000a8a0a0) (3) Data frame handling\nI0202 23:55:10.266336 3029 log.go:181] (0xc000a8a0a0) (3) Data frame sent\nI0202 23:55:10.266346 3029 log.go:181] (0xc000276000) Data frame received for 5\nI0202 23:55:10.266352 3029 log.go:181] (0xc0006ad040) (5) Data frame handling\nI0202 23:55:10.266358 3029 log.go:181] (0xc0006ad040) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.13:32238/\nI0202 23:55:10.271993 3029 log.go:181] (0xc000276000) Data frame received for 3\nI0202 23:55:10.272018 3029 log.go:181] (0xc000a8a0a0) (3) Data frame handling\nI0202 23:55:10.272036 3029 log.go:181] (0xc000a8a0a0) (3) Data frame sent\nI0202 23:55:10.272705 3029 log.go:181] (0xc000276000) Data frame received for 3\nI0202 23:55:10.272730 3029 log.go:181] (0xc000a8a0a0) (3) Data frame handling\nI0202 23:55:10.272739 3029 log.go:181] (0xc000a8a0a0) (3) Data frame sent\nI0202 23:55:10.272748 3029 log.go:181] (0xc000276000) Data frame received for 5\nI0202 23:55:10.272755 3029 log.go:181] (0xc0006ad040) (5) Data frame handling\nI0202 23:55:10.272763 3029 log.go:181] (0xc0006ad040) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.13:32238/\nI0202 23:55:10.277261 3029 log.go:181] (0xc000276000) Data frame received for 3\nI0202 23:55:10.277273 3029 log.go:181] (0xc000a8a0a0) (3) Data frame handling\nI0202 23:55:10.277280 3029 log.go:181] (0xc000a8a0a0) (3) Data frame sent\nI0202 23:55:10.277885 3029 log.go:181] (0xc000276000) Data frame received for 3\nI0202 23:55:10.277895 3029 log.go:181] (0xc000a8a0a0) (3) Data frame handling\nI0202 23:55:10.277901 3029 log.go:181] (0xc000a8a0a0) (3) Data frame sent\nI0202 23:55:10.277991 3029 log.go:181] (0xc000276000) Data frame received for 5\nI0202 23:55:10.278011 3029 log.go:181] (0xc0006ad040) (5) Data frame handling\nI0202 23:55:10.278035 3029 log.go:181] (0xc0006ad040) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.13:32238/\nI0202 23:55:10.281640 3029 log.go:181] (0xc000276000) Data frame received for 3\nI0202 23:55:10.281653 3029 log.go:181] (0xc000a8a0a0) (3) Data frame handling\nI0202 23:55:10.281665 3029 log.go:181] (0xc000a8a0a0) (3) Data frame sent\nI0202 23:55:10.282411 3029 log.go:181] (0xc000276000) Data frame received for 3\nI0202 23:55:10.282427 3029 log.go:181] (0xc000a8a0a0) (3) Data frame handling\nI0202 23:55:10.282439 3029 log.go:181] (0xc000a8a0a0) (3) Data frame sent\nI0202 23:55:10.282449 3029 log.go:181] (0xc000276000) Data frame received for 5\nI0202 23:55:10.282459 3029 log.go:181] (0xc0006ad040) (5) Data frame handling\nI0202 23:55:10.282465 3029 log.go:181] (0xc0006ad040) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.13:32238/\nI0202 23:55:10.287656 3029 log.go:181] (0xc000276000) Data frame received for 3\nI0202 23:55:10.287676 3029 log.go:181] (0xc000a8a0a0) (3) Data frame handling\nI0202 
23:55:10.287685 3029 log.go:181] (0xc000a8a0a0) (3) Data frame sent\nI0202 23:55:10.288421 3029 log.go:181] (0xc000276000) Data frame received for 3\nI0202 23:55:10.288435 3029 log.go:181] (0xc000a8a0a0) (3) Data frame handling\nI0202 23:55:10.288443 3029 log.go:181] (0xc000a8a0a0) (3) Data frame sent\nI0202 23:55:10.288468 3029 log.go:181] (0xc000276000) Data frame received for 5\nI0202 23:55:10.288492 3029 log.go:181] (0xc0006ad040) (5) Data frame handling\nI0202 23:55:10.288512 3029 log.go:181] (0xc0006ad040) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.13:32238/\nI0202 23:55:10.293933 3029 log.go:181] (0xc000276000) Data frame received for 3\nI0202 23:55:10.293958 3029 log.go:181] (0xc000a8a0a0) (3) Data frame handling\nI0202 23:55:10.293978 3029 log.go:181] (0xc000a8a0a0) (3) Data frame sent\nI0202 23:55:10.294913 3029 log.go:181] (0xc000276000) Data frame received for 3\nI0202 23:55:10.294928 3029 log.go:181] (0xc000a8a0a0) (3) Data frame handling\nI0202 23:55:10.294937 3029 log.go:181] (0xc000a8a0a0) (3) Data frame sent\nI0202 23:55:10.294946 3029 log.go:181] (0xc000276000) Data frame received for 5\nI0202 23:55:10.294952 3029 log.go:181] (0xc0006ad040) (5) Data frame handling\nI0202 23:55:10.294964 3029 log.go:181] (0xc0006ad040) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.13:32238/\nI0202 23:55:10.299378 3029 log.go:181] (0xc000276000) Data frame received for 3\nI0202 23:55:10.299408 3029 log.go:181] (0xc000a8a0a0) (3) Data frame handling\nI0202 23:55:10.299423 3029 log.go:181] (0xc000a8a0a0) (3) Data frame sent\nI0202 23:55:10.299958 3029 log.go:181] (0xc000276000) Data frame received for 5\nI0202 23:55:10.299977 3029 log.go:181] (0xc000276000) Data frame received for 3\nI0202 23:55:10.300002 3029 log.go:181] (0xc000a8a0a0) (3) Data frame handling\nI0202 23:55:10.300016 3029 log.go:181] (0xc000a8a0a0) (3) Data frame sent\nI0202 23:55:10.300032 3029 log.go:181] (0xc0006ad040) (5) Data frame handling\nI0202 23:55:10.300045 3029 log.go:181] (0xc0006ad040) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.13:32238/\nI0202 23:55:10.306324 3029 log.go:181] (0xc000276000) Data frame received for 3\nI0202 23:55:10.306341 3029 log.go:181] (0xc000a8a0a0) (3) Data frame handling\nI0202 23:55:10.306351 3029 log.go:181] (0xc000a8a0a0) (3) Data frame sent\nI0202 23:55:10.306764 3029 log.go:181] (0xc000276000) Data frame received for 3\nI0202 23:55:10.306799 3029 log.go:181] (0xc000a8a0a0) (3) Data frame handling\nI0202 23:55:10.306812 3029 log.go:181] (0xc000a8a0a0) (3) Data frame sent\nI0202 23:55:10.306834 3029 log.go:181] (0xc000276000) Data frame received for 5\nI0202 23:55:10.306842 3029 log.go:181] (0xc0006ad040) (5) Data frame handling\nI0202 23:55:10.306850 3029 log.go:181] (0xc0006ad040) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.13:32238/\nI0202 23:55:10.314010 3029 log.go:181] (0xc000276000) Data frame received for 3\nI0202 23:55:10.314035 3029 log.go:181] (0xc000a8a0a0) (3) Data frame handling\nI0202 23:55:10.314056 3029 log.go:181] (0xc000a8a0a0) (3) Data frame sent\nI0202 23:55:10.314661 3029 log.go:181] (0xc000276000) Data frame received for 3\nI0202 23:55:10.314684 3029 log.go:181] (0xc000a8a0a0) (3) Data frame handling\nI0202 23:55:10.314709 3029 log.go:181] (0xc000a8a0a0) (3) Data frame sent\nI0202 23:55:10.314729 3029 log.go:181] (0xc000276000) Data frame received for 5\nI0202 23:55:10.314744 3029 log.go:181] (0xc0006ad040) (5) Data frame 
handling\nI0202 23:55:10.314759 3029 log.go:181] (0xc0006ad040) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.13:32238/\nI0202 23:55:10.319759 3029 log.go:181] (0xc000276000) Data frame received for 3\nI0202 23:55:10.319795 3029 log.go:181] (0xc000a8a0a0) (3) Data frame handling\nI0202 23:55:10.319818 3029 log.go:181] (0xc000a8a0a0) (3) Data frame sent\nI0202 23:55:10.321639 3029 log.go:181] (0xc000276000) Data frame received for 3\nI0202 23:55:10.321661 3029 log.go:181] (0xc000a8a0a0) (3) Data frame handling\nI0202 23:55:10.321684 3029 log.go:181] (0xc000276000) Data frame received for 5\nI0202 23:55:10.321694 3029 log.go:181] (0xc0006ad040) (5) Data frame handling\nI0202 23:55:10.323472 3029 log.go:181] (0xc000276000) Data frame received for 1\nI0202 23:55:10.323500 3029 log.go:181] (0xc000a8a000) (1) Data frame handling\nI0202 23:55:10.323517 3029 log.go:181] (0xc000a8a000) (1) Data frame sent\nI0202 23:55:10.323530 3029 log.go:181] (0xc000276000) (0xc000a8a000) Stream removed, broadcasting: 1\nI0202 23:55:10.323748 3029 log.go:181] (0xc000276000) Go away received\nI0202 23:55:10.324067 3029 log.go:181] (0xc000276000) (0xc000a8a000) Stream removed, broadcasting: 1\nI0202 23:55:10.324102 3029 log.go:181] (0xc000276000) (0xc000a8a0a0) Stream removed, broadcasting: 3\nI0202 23:55:10.324121 3029 log.go:181] (0xc000276000) (0xc0006ad040) Stream removed, broadcasting: 5\n" Feb 2 23:55:10.330: INFO: stdout: "\naffinity-nodeport-transition-mz7l6\naffinity-nodeport-transition-mz7l6\naffinity-nodeport-transition-mz7l6\naffinity-nodeport-transition-mz7l6\naffinity-nodeport-transition-mz7l6\naffinity-nodeport-transition-mz7l6\naffinity-nodeport-transition-mz7l6\naffinity-nodeport-transition-mz7l6\naffinity-nodeport-transition-mz7l6\naffinity-nodeport-transition-mz7l6\naffinity-nodeport-transition-mz7l6\naffinity-nodeport-transition-mz7l6\naffinity-nodeport-transition-mz7l6\naffinity-nodeport-transition-mz7l6\naffinity-nodeport-transition-mz7l6\naffinity-nodeport-transition-mz7l6" Feb 2 23:55:10.330: INFO: Received response from host: affinity-nodeport-transition-mz7l6 Feb 2 23:55:10.330: INFO: Received response from host: affinity-nodeport-transition-mz7l6 Feb 2 23:55:10.330: INFO: Received response from host: affinity-nodeport-transition-mz7l6 Feb 2 23:55:10.330: INFO: Received response from host: affinity-nodeport-transition-mz7l6 Feb 2 23:55:10.330: INFO: Received response from host: affinity-nodeport-transition-mz7l6 Feb 2 23:55:10.330: INFO: Received response from host: affinity-nodeport-transition-mz7l6 Feb 2 23:55:10.330: INFO: Received response from host: affinity-nodeport-transition-mz7l6 Feb 2 23:55:10.330: INFO: Received response from host: affinity-nodeport-transition-mz7l6 Feb 2 23:55:10.330: INFO: Received response from host: affinity-nodeport-transition-mz7l6 Feb 2 23:55:10.330: INFO: Received response from host: affinity-nodeport-transition-mz7l6 Feb 2 23:55:10.330: INFO: Received response from host: affinity-nodeport-transition-mz7l6 Feb 2 23:55:10.330: INFO: Received response from host: affinity-nodeport-transition-mz7l6 Feb 2 23:55:10.330: INFO: Received response from host: affinity-nodeport-transition-mz7l6 Feb 2 23:55:10.330: INFO: Received response from host: affinity-nodeport-transition-mz7l6 Feb 2 23:55:10.330: INFO: Received response from host: affinity-nodeport-transition-mz7l6 Feb 2 23:55:10.330: INFO: Received response from host: affinity-nodeport-transition-mz7l6 Feb 2 23:55:10.330: INFO: Cleaning up the exec pod STEP: deleting 
ReplicationController affinity-nodeport-transition in namespace services-3218, will wait for the garbage collector to delete the pods Feb 2 23:55:10.631: INFO: Deleting ReplicationController affinity-nodeport-transition took: 202.807058ms Feb 2 23:55:11.231: INFO: Terminating ReplicationController affinity-nodeport-transition pods took: 600.187487ms [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 2 23:56:09.870: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-3218" for this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 • [SLOW TEST:72.681 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","total":309,"completed":203,"skipped":3430,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 2 23:56:09.904: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with best effort scope. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a ResourceQuota with best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a best-effort pod STEP: Ensuring resource quota with best effort scope captures the pod usage STEP: Ensuring resource quota with not best effort ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a not best-effort pod STEP: Ensuring resource quota with not best effort scope captures the pod usage STEP: Ensuring resource quota with best effort scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 2 23:56:26.142: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-6468" for this suite. • [SLOW TEST:16.248 seconds] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with best effort scope. 
[Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance]","total":309,"completed":204,"skipped":3436,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 2 23:56:26.152: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating secret with name secret-test-map-317eeec3-7b3b-46dc-b097-489fbe13d93e STEP: Creating a pod to test consume secrets Feb 2 23:56:26.237: INFO: Waiting up to 5m0s for pod "pod-secrets-7eb1d807-af5f-4432-8f46-b649e48b4cc1" in namespace "secrets-5913" to be "Succeeded or Failed" Feb 2 23:56:26.293: INFO: Pod "pod-secrets-7eb1d807-af5f-4432-8f46-b649e48b4cc1": Phase="Pending", Reason="", readiness=false. Elapsed: 55.50504ms Feb 2 23:56:28.331: INFO: Pod "pod-secrets-7eb1d807-af5f-4432-8f46-b649e48b4cc1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0935825s Feb 2 23:56:30.394: INFO: Pod "pod-secrets-7eb1d807-af5f-4432-8f46-b649e48b4cc1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.156326257s STEP: Saw pod success Feb 2 23:56:30.394: INFO: Pod "pod-secrets-7eb1d807-af5f-4432-8f46-b649e48b4cc1" satisfied condition "Succeeded or Failed" Feb 2 23:56:30.396: INFO: Trying to get logs from node leguer-worker pod pod-secrets-7eb1d807-af5f-4432-8f46-b649e48b4cc1 container secret-volume-test: STEP: delete the pod Feb 2 23:56:30.462: INFO: Waiting for pod pod-secrets-7eb1d807-af5f-4432-8f46-b649e48b4cc1 to disappear Feb 2 23:56:30.475: INFO: Pod pod-secrets-7eb1d807-af5f-4432-8f46-b649e48b4cc1 no longer exists [AfterEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 2 23:56:30.475: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-5913" for this suite. 
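For readers reproducing the Secrets-volume scenario above outside the e2e framework, a minimal Go sketch using the k8s.io/api types follows. The secret name, payload, image, and 0444 file mode are illustrative placeholders, not the test's generated fixtures; the point is the Items mapping ("with mappings") plus the per-item Mode ("Item Mode set").

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Placeholder secret; the e2e test generates its own name and payload.
	secret := &corev1.Secret{
		ObjectMeta: metav1.ObjectMeta{Name: "demo-secret"},
		StringData: map[string]string{"data-1": "value-1"},
	}

	mode := int32(0444) // explicit per-item file mode ("Item Mode set")
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "secret-volume",
				VolumeSource: corev1.VolumeSource{
					Secret: &corev1.SecretVolumeSource{
						SecretName: secret.Name,
						// "with mappings": remap the key to a new path inside the mount.
						Items: []corev1.KeyToPath{{Key: "data-1", Path: "new-path-data-1", Mode: &mode}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "secret-volume-test",
				Image:   "busybox", // stand-in for the test's mount-checking image
				Command: []string{"sh", "-c", "ls -l /etc/secret-volume && cat /etc/secret-volume/new-path-data-1"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "secret-volume",
					MountPath: "/etc/secret-volume",
					ReadOnly:  true,
				}},
			}},
		},
	}
	fmt.Println(secret.Name, pod.Name)
}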
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":309,"completed":205,"skipped":3445,"failed":0} SSSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 2 23:56:30.482: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:53 [It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 Feb 2 23:56:30.648: INFO: The status of Pod test-webserver-b0fdd235-5e13-41ed-a6c2-e04743f9bd29 is Pending, waiting for it to be Running (with Ready = true) Feb 2 23:56:32.652: INFO: The status of Pod test-webserver-b0fdd235-5e13-41ed-a6c2-e04743f9bd29 is Pending, waiting for it to be Running (with Ready = true) Feb 2 23:56:34.651: INFO: The status of Pod test-webserver-b0fdd235-5e13-41ed-a6c2-e04743f9bd29 is Running (Ready = false) Feb 2 23:56:36.653: INFO: The status of Pod test-webserver-b0fdd235-5e13-41ed-a6c2-e04743f9bd29 is Running (Ready = false) Feb 2 23:56:38.653: INFO: The status of Pod test-webserver-b0fdd235-5e13-41ed-a6c2-e04743f9bd29 is Running (Ready = false) Feb 2 23:56:40.653: INFO: The status of Pod test-webserver-b0fdd235-5e13-41ed-a6c2-e04743f9bd29 is Running (Ready = false) Feb 2 23:56:42.653: INFO: The status of Pod test-webserver-b0fdd235-5e13-41ed-a6c2-e04743f9bd29 is Running (Ready = false) Feb 2 23:56:44.652: INFO: The status of Pod test-webserver-b0fdd235-5e13-41ed-a6c2-e04743f9bd29 is Running (Ready = false) Feb 2 23:56:46.653: INFO: The status of Pod test-webserver-b0fdd235-5e13-41ed-a6c2-e04743f9bd29 is Running (Ready = false) Feb 2 23:56:48.652: INFO: The status of Pod test-webserver-b0fdd235-5e13-41ed-a6c2-e04743f9bd29 is Running (Ready = false) Feb 2 23:56:50.653: INFO: The status of Pod test-webserver-b0fdd235-5e13-41ed-a6c2-e04743f9bd29 is Running (Ready = false) Feb 2 23:56:52.653: INFO: The status of Pod test-webserver-b0fdd235-5e13-41ed-a6c2-e04743f9bd29 is Running (Ready = true) Feb 2 23:56:52.656: INFO: Container started at 2021-02-02 23:56:33 +0000 UTC, pod became ready at 2021-02-02 23:56:51 +0000 UTC [AfterEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 2 23:56:52.656: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-2032" for this suite. 
• [SLOW TEST:22.182 seconds] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":309,"completed":206,"skipped":3453,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 2 23:56:52.665: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should allow opting out of API token automount [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: getting the auto-created API token Feb 2 23:56:53.305: INFO: created pod pod-service-account-defaultsa Feb 2 23:56:53.305: INFO: pod pod-service-account-defaultsa service account token volume mount: true Feb 2 23:56:53.346: INFO: created pod pod-service-account-mountsa Feb 2 23:56:53.346: INFO: pod pod-service-account-mountsa service account token volume mount: true Feb 2 23:56:53.350: INFO: created pod pod-service-account-nomountsa Feb 2 23:56:53.350: INFO: pod pod-service-account-nomountsa service account token volume mount: false Feb 2 23:56:53.371: INFO: created pod pod-service-account-defaultsa-mountspec Feb 2 23:56:53.371: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true Feb 2 23:56:53.395: INFO: created pod pod-service-account-mountsa-mountspec Feb 2 23:56:53.395: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true Feb 2 23:56:53.440: INFO: created pod pod-service-account-nomountsa-mountspec Feb 2 23:56:53.440: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true Feb 2 23:56:53.502: INFO: created pod pod-service-account-defaultsa-nomountspec Feb 2 23:56:53.502: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false Feb 2 23:56:53.522: INFO: created pod pod-service-account-mountsa-nomountspec Feb 2 23:56:53.522: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false Feb 2 23:56:53.558: INFO: created pod pod-service-account-nomountsa-nomountspec Feb 2 23:56:53.558: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false [AfterEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 2 23:56:53.558: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-7651" for this suite. 
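The ServiceAccounts case above enumerates the combinations of the two automount knobs: automountServiceAccountToken on the ServiceAccount and on the pod spec, with the pod-level setting taking precedence. A small Go sketch of the two objects follows; the names are hypothetical.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func boolPtr(b bool) *bool { return &b }

func main() {
	// ServiceAccount that opts out of token automount by default.
	sa := &corev1.ServiceAccount{
		ObjectMeta:                   metav1.ObjectMeta{Name: "nomount-sa"},
		AutomountServiceAccountToken: boolPtr(false),
	}

	// The pod can override the ServiceAccount either way; the "mountsa-nomountspec" style
	// pods in the log exercise exactly this precedence.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-service-account-demo"},
		Spec: corev1.PodSpec{
			ServiceAccountName:           sa.Name,
			AutomountServiceAccountToken: boolPtr(false),
			Containers: []corev1.Container{{
				Name:    "app",
				Image:   "busybox",
				Command: []string{"sleep", "3600"},
			}},
		},
	}
	fmt.Println(sa.Name, "pod automount:", *pod.Spec.AutomountServiceAccountToken)
}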
•{"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance]","total":309,"completed":207,"skipped":3486,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 2 23:56:53.666: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-156 [It] should have a working scale subresource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating statefulset ss in namespace statefulset-156 Feb 2 23:56:53.945: INFO: Found 0 stateful pods, waiting for 1 Feb 2 23:57:04.037: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Pending - Ready=false Feb 2 23:57:13.950: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: getting scale subresource STEP: updating a scale subresource STEP: verifying the statefulset Spec.Replicas was modified [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 Feb 2 23:57:13.979: INFO: Deleting all statefulset in ns statefulset-156 Feb 2 23:57:13.991: INFO: Scaling statefulset ss to 0 Feb 2 23:58:14.101: INFO: Waiting for statefulset status.replicas updated to 0 Feb 2 23:58:14.104: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 2 23:58:14.119: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-156" for this suite. 
• [SLOW TEST:80.461 seconds] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 should have a working scale subresource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":309,"completed":208,"skipped":3513,"failed":0} SS ------------------------------ [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [k8s.io] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 2 23:58:14.127: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [AfterEach] [k8s.io] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 2 23:58:18.267: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-1592" for this suite. 
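The Kubelet read-only-root test above relies on the container-level readOnlyRootFilesystem security setting; with it set, any write to the root filesystem fails. A minimal sketch follows; the image and command are placeholders, not the test's busybox fixture.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func boolPtr(b bool) *bool { return &b }

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "readonly-rootfs-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "busybox",
				Image: "busybox",
				// The write is expected to fail because the root filesystem is mounted read-only.
				Command: []string{"sh", "-c", "echo test > /file || echo 'write blocked as expected'"},
				SecurityContext: &corev1.SecurityContext{
					ReadOnlyRootFilesystem: boolPtr(true),
				},
			}},
		},
	}
	fmt.Println(pod.Name, "readOnlyRootFilesystem:", *pod.Spec.Containers[0].SecurityContext.ReadOnlyRootFilesystem)
}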
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":309,"completed":209,"skipped":3515,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 2 23:58:18.277: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:745 [It] should be able to change the type from NodePort to ExternalName [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: creating a service nodeport-service with the type=NodePort in namespace services-4597 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-4597 STEP: creating replication controller externalsvc in namespace services-4597 I0202 23:58:18.478667 7 runners.go:190] Created replication controller with name: externalsvc, namespace: services-4597, replica count: 2 I0202 23:58:21.529156 7 runners.go:190] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0202 23:58:24.529554 7 runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the NodePort service to type=ExternalName Feb 2 23:58:24.579: INFO: Creating new exec pod Feb 2 23:58:28.618: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=services-4597 exec execpod97lr7 -- /bin/sh -x -c nslookup nodeport-service.services-4597.svc.cluster.local' Feb 2 23:58:28.861: INFO: stderr: "I0202 23:58:28.753274 3044 log.go:181] (0xc000220000) (0xc000922780) Create stream\nI0202 23:58:28.753330 3044 log.go:181] (0xc000220000) (0xc000922780) Stream added, broadcasting: 1\nI0202 23:58:28.757397 3044 log.go:181] (0xc000220000) Reply frame received for 1\nI0202 23:58:28.757424 3044 log.go:181] (0xc000220000) (0xc000950000) Create stream\nI0202 23:58:28.757431 3044 log.go:181] (0xc000220000) (0xc000950000) Stream added, broadcasting: 3\nI0202 23:58:28.758361 3044 log.go:181] (0xc000220000) Reply frame received for 3\nI0202 23:58:28.758401 3044 log.go:181] (0xc000220000) (0xc000a903c0) Create stream\nI0202 23:58:28.758414 3044 log.go:181] (0xc000220000) (0xc000a903c0) Stream added, broadcasting: 5\nI0202 23:58:28.759445 3044 log.go:181] (0xc000220000) Reply frame received for 5\nI0202 23:58:28.842660 3044 log.go:181] (0xc000220000) Data frame received for 5\nI0202 23:58:28.842685 3044 log.go:181] (0xc000a903c0) (5) Data frame handling\nI0202 23:58:28.842699 3044 log.go:181] (0xc000a903c0) (5) Data frame sent\n+ nslookup nodeport-service.services-4597.svc.cluster.local\nI0202 23:58:28.851876 3044 log.go:181] 
(0xc000220000) Data frame received for 3\nI0202 23:58:28.851897 3044 log.go:181] (0xc000950000) (3) Data frame handling\nI0202 23:58:28.851913 3044 log.go:181] (0xc000950000) (3) Data frame sent\nI0202 23:58:28.852801 3044 log.go:181] (0xc000220000) Data frame received for 3\nI0202 23:58:28.852822 3044 log.go:181] (0xc000950000) (3) Data frame handling\nI0202 23:58:28.852938 3044 log.go:181] (0xc000950000) (3) Data frame sent\nI0202 23:58:28.853549 3044 log.go:181] (0xc000220000) Data frame received for 3\nI0202 23:58:28.853571 3044 log.go:181] (0xc000950000) (3) Data frame handling\nI0202 23:58:28.853596 3044 log.go:181] (0xc000220000) Data frame received for 5\nI0202 23:58:28.853615 3044 log.go:181] (0xc000a903c0) (5) Data frame handling\nI0202 23:58:28.855497 3044 log.go:181] (0xc000220000) Data frame received for 1\nI0202 23:58:28.855524 3044 log.go:181] (0xc000922780) (1) Data frame handling\nI0202 23:58:28.855539 3044 log.go:181] (0xc000922780) (1) Data frame sent\nI0202 23:58:28.855559 3044 log.go:181] (0xc000220000) (0xc000922780) Stream removed, broadcasting: 1\nI0202 23:58:28.855572 3044 log.go:181] (0xc000220000) Go away received\nI0202 23:58:28.855969 3044 log.go:181] (0xc000220000) (0xc000922780) Stream removed, broadcasting: 1\nI0202 23:58:28.856002 3044 log.go:181] (0xc000220000) (0xc000950000) Stream removed, broadcasting: 3\nI0202 23:58:28.856023 3044 log.go:181] (0xc000220000) (0xc000a903c0) Stream removed, broadcasting: 5\n" Feb 2 23:58:28.861: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nnodeport-service.services-4597.svc.cluster.local\tcanonical name = externalsvc.services-4597.svc.cluster.local.\nName:\texternalsvc.services-4597.svc.cluster.local\nAddress: 10.96.228.65\n\n" STEP: deleting ReplicationController externalsvc in namespace services-4597, will wait for the garbage collector to delete the pods Feb 2 23:58:28.922: INFO: Deleting ReplicationController externalsvc took: 7.301008ms Feb 2 23:58:29.522: INFO: Terminating ReplicationController externalsvc pods took: 600.224862ms Feb 2 23:59:30.257: INFO: Cleaning up the NodePort to ExternalName test service [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 2 23:59:30.311: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-4597" for this suite. 
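The nslookup output above shows the converted service resolving as a CNAME to externalsvc once its type is switched. A client-go sketch of the "changing the NodePort service to type=ExternalName" step follows; it is not the framework's own helper, and the namespace and service names are the ones this run used.

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ns := "services-4597" // namespace from the log; adjust for your cluster

	svc, err := cs.CoreV1().Services(ns).Get(context.TODO(), "nodeport-service", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// Point the service at another service's in-cluster DNS name. ClusterIP and node ports
	// must be cleared when converting to ExternalName, or the update is rejected.
	svc.Spec.Type = corev1.ServiceTypeExternalName
	svc.Spec.ExternalName = "externalsvc." + ns + ".svc.cluster.local"
	svc.Spec.ClusterIP = ""
	svc.Spec.ClusterIPs = nil
	svc.Spec.Ports = nil
	if _, err := cs.CoreV1().Services(ns).Update(context.TODO(), svc, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
	// In-cluster DNS lookups of nodeport-service.<ns>.svc.cluster.local now return a CNAME
	// to externalsvc, matching the captured nslookup output.
}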
[AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 • [SLOW TEST:72.053 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from NodePort to ExternalName [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":309,"completed":210,"skipped":3545,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 2 23:59:30.333: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a pod to test emptydir 0644 on node default medium Feb 2 23:59:30.419: INFO: Waiting up to 5m0s for pod "pod-e0566fc9-0c8a-429f-964e-7cea7727b1e2" in namespace "emptydir-8298" to be "Succeeded or Failed" Feb 2 23:59:30.426: INFO: Pod "pod-e0566fc9-0c8a-429f-964e-7cea7727b1e2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.174076ms Feb 2 23:59:32.430: INFO: Pod "pod-e0566fc9-0c8a-429f-964e-7cea7727b1e2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010420109s Feb 2 23:59:34.434: INFO: Pod "pod-e0566fc9-0c8a-429f-964e-7cea7727b1e2": Phase="Running", Reason="", readiness=true. Elapsed: 4.014890007s Feb 2 23:59:36.439: INFO: Pod "pod-e0566fc9-0c8a-429f-964e-7cea7727b1e2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.019671354s STEP: Saw pod success Feb 2 23:59:36.439: INFO: Pod "pod-e0566fc9-0c8a-429f-964e-7cea7727b1e2" satisfied condition "Succeeded or Failed" Feb 2 23:59:36.442: INFO: Trying to get logs from node leguer-worker pod pod-e0566fc9-0c8a-429f-964e-7cea7727b1e2 container test-container: STEP: delete the pod Feb 2 23:59:36.486: INFO: Waiting for pod pod-e0566fc9-0c8a-429f-964e-7cea7727b1e2 to disappear Feb 2 23:59:36.508: INFO: Pod pod-e0566fc9-0c8a-429f-964e-7cea7727b1e2 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 2 23:59:36.509: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8298" for this suite. 
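The EmptyDir case above exercises a default-medium emptyDir written by a non-root user with 0644 permissions. A minimal sketch follows; the UID, image, and paths are assumptions, and the tmpfs variants elsewhere in this run would additionally set Medium to StorageMediumMemory.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func int64Ptr(i int64) *int64 { return &i }

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-nonroot-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			SecurityContext: &corev1.PodSecurityContext{
				RunAsUser: int64Ptr(1001), // hypothetical non-root UID
			},
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				// Empty EmptyDirVolumeSource means the default (node disk) medium.
				VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
			}},
			Containers: []corev1.Container{{
				Name:         "writer",
				Image:        "busybox",
				Command:      []string{"sh", "-c", "touch /ed/f && chmod 0644 /ed/f && ls -l /ed/f"},
				VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/ed"}},
			}},
		},
	}
	fmt.Println(pod.Name)
}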
• [SLOW TEST:6.184 seconds] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:45 should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":309,"completed":211,"skipped":3700,"failed":0} SSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 2 23:59:36.518: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:92 Feb 2 23:59:36.568: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Feb 2 23:59:36.586: INFO: Waiting for terminating namespaces to be deleted... Feb 2 23:59:36.590: INFO: Logging pods the apiserver thinks is on node leguer-worker before test Feb 2 23:59:36.601: INFO: rally-0a12c122-7dnmol6z-vwbwf from c-rally-0a12c122-vmv18pta started at 2021-01-28 03:37:38 +0000 UTC (1 container statuses recorded) Feb 2 23:59:36.601: INFO: Container rally-0a12c122-7dnmol6z ready: true, restart count 0 Feb 2 23:59:36.601: INFO: rally-0a12c122-fagfvvpw-sskvj from c-rally-0a12c122-vmv18pta started at 2021-01-28 03:37:54 +0000 UTC (1 container statuses recorded) Feb 2 23:59:36.601: INFO: Container rally-0a12c122-fagfvvpw ready: true, restart count 0 Feb 2 23:59:36.601: INFO: rally-0a12c122-iqj2mcat-2hfpj from c-rally-0a12c122-vmv18pta started at 2021-01-28 03:37:16 +0000 UTC (1 container statuses recorded) Feb 2 23:59:36.601: INFO: Container rally-0a12c122-iqj2mcat ready: true, restart count 0 Feb 2 23:59:36.601: INFO: rally-0a12c122-iqj2mcat-swp7f from c-rally-0a12c122-vmv18pta started at 2021-01-28 03:37:16 +0000 UTC (1 container statuses recorded) Feb 2 23:59:36.601: INFO: Container rally-0a12c122-iqj2mcat ready: true, restart count 0 Feb 2 23:59:36.601: INFO: rally-a8f48c6d-3kmika18-pdtzv from c-rally-a8f48c6d-bmjklszo started at 2021-01-10 20:04:23 +0000 UTC (1 container statuses recorded) Feb 2 23:59:36.601: INFO: Container rally-a8f48c6d-3kmika18 ready: true, restart count 0 Feb 2 23:59:36.601: INFO: rally-a8f48c6d-3kmika18-pllzg from c-rally-a8f48c6d-bmjklszo started at 2021-01-10 20:04:23 +0000 UTC (1 container statuses recorded) Feb 2 23:59:36.601: INFO: Container rally-a8f48c6d-3kmika18 ready: true, restart count 0 Feb 2 23:59:36.601: INFO: rally-a8f48c6d-4cyi45kq-j5tzz from c-rally-a8f48c6d-bmjklszo started at 2021-01-10 20:04:23 +0000 UTC (1 container statuses recorded) Feb 2 23:59:36.601: INFO: Container rally-a8f48c6d-4cyi45kq ready: true, restart count 0 Feb 2 23:59:36.601: INFO: rally-a8f48c6d-f3hls6a3-57dwc from c-rally-a8f48c6d-bmjklszo started at 2021-01-10 20:04:32 +0000 UTC (1 
container statuses recorded) Feb 2 23:59:36.601: INFO: Container rally-a8f48c6d-f3hls6a3 ready: true, restart count 0 Feb 2 23:59:36.601: INFO: rally-a8f48c6d-1y3amfc0-lp8st from c-rally-a8f48c6d-lypjtwol started at 2021-01-10 20:04:32 +0000 UTC (1 container statuses recorded) Feb 2 23:59:36.601: INFO: Container rally-a8f48c6d-1y3amfc0 ready: true, restart count 0 Feb 2 23:59:36.601: INFO: rally-a8f48c6d-9pqmjehi-9zwjj from c-rally-a8f48c6d-lypjtwol started at 2021-01-10 20:04:23 +0000 UTC (1 container statuses recorded) Feb 2 23:59:36.601: INFO: Container rally-a8f48c6d-9pqmjehi ready: true, restart count 0 Feb 2 23:59:36.601: INFO: chaos-controller-manager-69c479c674-s796v from default started at 2021-01-10 20:58:24 +0000 UTC (1 container statuses recorded) Feb 2 23:59:36.601: INFO: Container chaos-mesh ready: true, restart count 0 Feb 2 23:59:36.601: INFO: chaos-daemon-lv692 from default started at 2021-01-10 20:58:25 +0000 UTC (1 container statuses recorded) Feb 2 23:59:36.601: INFO: Container chaos-daemon ready: true, restart count 0 Feb 2 23:59:36.601: INFO: kindnet-psm25 from kube-system started at 2021-01-10 17:38:10 +0000 UTC (1 container statuses recorded) Feb 2 23:59:36.601: INFO: Container kindnet-cni ready: true, restart count 0 Feb 2 23:59:36.601: INFO: kube-proxy-bmbcs from kube-system started at 2021-01-10 17:38:10 +0000 UTC (1 container statuses recorded) Feb 2 23:59:36.601: INFO: Container kube-proxy ready: true, restart count 0 Feb 2 23:59:36.601: INFO: Logging pods the apiserver thinks is on node leguer-worker2 before test Feb 2 23:59:36.607: INFO: rally-0a12c122-4xacdhsf-44v5r from c-rally-0a12c122-vmv18pta started at 2021-01-28 03:37:16 +0000 UTC (1 container statuses recorded) Feb 2 23:59:36.607: INFO: Container rally-0a12c122-4xacdhsf ready: true, restart count 0 Feb 2 23:59:36.607: INFO: rally-0a12c122-4xacdhsf-5c974 from c-rally-0a12c122-vmv18pta started at 2021-01-28 03:37:16 +0000 UTC (1 container statuses recorded) Feb 2 23:59:36.607: INFO: Container rally-0a12c122-4xacdhsf ready: true, restart count 0 Feb 2 23:59:36.607: INFO: rally-0a12c122-7dnmol6z-n9ztn from c-rally-0a12c122-vmv18pta started at 2021-01-28 03:37:38 +0000 UTC (1 container statuses recorded) Feb 2 23:59:36.607: INFO: Container rally-0a12c122-7dnmol6z ready: true, restart count 0 Feb 2 23:59:36.607: INFO: rally-0a12c122-fagfvvpw-cxsgt from c-rally-0a12c122-vmv18pta started at 2021-01-28 03:37:53 +0000 UTC (1 container statuses recorded) Feb 2 23:59:36.607: INFO: Container rally-0a12c122-fagfvvpw ready: true, restart count 0 Feb 2 23:59:36.607: INFO: rally-0a12c122-lqiac6cu-6fsz6 from c-rally-0a12c122-vmv18pta started at 2021-01-28 03:37:16 +0000 UTC (1 container statuses recorded) Feb 2 23:59:36.607: INFO: Container rally-0a12c122-lqiac6cu ready: true, restart count 0 Feb 2 23:59:36.607: INFO: rally-0a12c122-lqiac6cu-99jsp from c-rally-0a12c122-vmv18pta started at 2021-01-28 03:37:16 +0000 UTC (1 container statuses recorded) Feb 2 23:59:36.607: INFO: Container rally-0a12c122-lqiac6cu ready: true, restart count 0 Feb 2 23:59:36.607: INFO: rally-a8f48c6d-4cyi45kq-knr4r from c-rally-a8f48c6d-bmjklszo started at 2021-01-10 20:04:23 +0000 UTC (1 container statuses recorded) Feb 2 23:59:36.607: INFO: Container rally-a8f48c6d-4cyi45kq ready: true, restart count 0 Feb 2 23:59:36.607: INFO: rally-a8f48c6d-f3hls6a3-dwt8n from c-rally-a8f48c6d-bmjklszo started at 2021-01-10 20:04:32 +0000 UTC (1 container statuses recorded) Feb 2 23:59:36.607: INFO: Container rally-a8f48c6d-f3hls6a3 ready: true, restart 
count 0 Feb 2 23:59:36.607: INFO: rally-a8f48c6d-1y3amfc0-hh9qk from c-rally-a8f48c6d-lypjtwol started at 2021-01-10 20:04:32 +0000 UTC (1 container statuses recorded) Feb 2 23:59:36.607: INFO: Container rally-a8f48c6d-1y3amfc0 ready: true, restart count 0 Feb 2 23:59:36.607: INFO: rally-a8f48c6d-9pqmjehi-85slb from c-rally-a8f48c6d-lypjtwol started at 2021-01-10 20:04:23 +0000 UTC (1 container statuses recorded) Feb 2 23:59:36.607: INFO: Container rally-a8f48c6d-9pqmjehi ready: true, restart count 0 Feb 2 23:59:36.607: INFO: rally-a8f48c6d-vnukxqu0-llj24 from c-rally-a8f48c6d-lypjtwol started at 2021-01-10 20:04:23 +0000 UTC (1 container statuses recorded) Feb 2 23:59:36.607: INFO: Container rally-a8f48c6d-vnukxqu0 ready: true, restart count 0 Feb 2 23:59:36.607: INFO: rally-a8f48c6d-vnukxqu0-v85kr from c-rally-a8f48c6d-lypjtwol started at 2021-01-10 20:04:23 +0000 UTC (1 container statuses recorded) Feb 2 23:59:36.607: INFO: Container rally-a8f48c6d-vnukxqu0 ready: true, restart count 0 Feb 2 23:59:36.607: INFO: chaos-daemon-ffkg7 from default started at 2021-01-10 20:58:25 +0000 UTC (1 container statuses recorded) Feb 2 23:59:36.607: INFO: Container chaos-daemon ready: true, restart count 0 Feb 2 23:59:36.607: INFO: kindnet-8wggd from kube-system started at 2021-01-10 17:38:10 +0000 UTC (1 container statuses recorded) Feb 2 23:59:36.607: INFO: Container kindnet-cni ready: true, restart count 0 Feb 2 23:59:36.607: INFO: kube-proxy-29gxg from kube-system started at 2021-01-10 17:38:09 +0000 UTC (1 container statuses recorded) Feb 2 23:59:36.607: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that NodeSelector is respected if matching [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-68e036c9-b8c0-4dac-a2b7-e406181f5c8d 42 STEP: Trying to relaunch the pod, now with labels. STEP: removing the label kubernetes.io/e2e-68e036c9-b8c0-4dac-a2b7-e406181f5c8d off the node leguer-worker STEP: verifying the node doesn't have the label kubernetes.io/e2e-68e036c9-b8c0-4dac-a2b7-e406181f5c8d [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 2 23:59:44.840: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-5227" for this suite. 
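The NodeSelector steps above boil down to labeling the chosen node and relaunching the pod with a matching nodeSelector. A client-go sketch follows; the label key/value and pod name are hypothetical, not the random label the test generated, and the node name is the one this run picked.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Apply a label to the node that should receive the pod.
	node, label, value := "leguer-worker", "example.com/e2e-demo", "42"
	patch := []byte(fmt.Sprintf(`{"metadata":{"labels":{"%s":"%s"}}}`, label, value))
	if _, err := cs.CoreV1().Nodes().Patch(context.TODO(), node, types.StrategicMergePatchType, patch, metav1.PatchOptions{}); err != nil {
		panic(err)
	}

	// Relaunch the pod with a matching nodeSelector; the scheduler must place it on that node.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "nodeselector-demo"},
		Spec: corev1.PodSpec{
			NodeSelector: map[string]string{label: value},
			Containers:   []corev1.Container{{Name: "pause", Image: "k8s.gcr.io/pause:3.2"}},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}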
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:83 • [SLOW TEST:8.331 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that NodeSelector is respected if matching [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance]","total":309,"completed":212,"skipped":3711,"failed":0} SSSSSSSS ------------------------------ [sig-apps] Deployment deployment should support rollover [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 2 23:59:44.849: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:85 [It] deployment should support rollover [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 Feb 2 23:59:44.965: INFO: Pod name rollover-pod: Found 0 pods out of 1 Feb 2 23:59:49.976: INFO: Pod name rollover-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Feb 2 23:59:49.976: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready Feb 2 23:59:51.980: INFO: Creating deployment "test-rollover-deployment" Feb 2 23:59:51.988: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations Feb 2 23:59:53.995: INFO: Check revision of new replica set for deployment "test-rollover-deployment" Feb 2 23:59:54.002: INFO: Ensure that both replica sets have 1 created replica Feb 2 23:59:54.008: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update Feb 2 23:59:54.017: INFO: Updating deployment test-rollover-deployment Feb 2 23:59:54.017: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller Feb 2 23:59:56.078: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 Feb 2 23:59:56.084: INFO: Make sure deployment "test-rollover-deployment" is complete Feb 2 23:59:56.089: INFO: all replica sets need to contain the pod-template-hash label Feb 2 23:59:56.089: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63747907192, loc:(*time.Location)(0x7962e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63747907192, loc:(*time.Location)(0x7962e20)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, 
ext:63747907194, loc:(*time.Location)(0x7962e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63747907192, loc:(*time.Location)(0x7962e20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-668db69979\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 2 23:59:58.098: INFO: all replica sets need to contain the pod-template-hash label Feb 2 23:59:58.098: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63747907192, loc:(*time.Location)(0x7962e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63747907192, loc:(*time.Location)(0x7962e20)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63747907197, loc:(*time.Location)(0x7962e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63747907192, loc:(*time.Location)(0x7962e20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-668db69979\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 3 00:00:00.097: INFO: all replica sets need to contain the pod-template-hash label Feb 3 00:00:00.097: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63747907192, loc:(*time.Location)(0x7962e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63747907192, loc:(*time.Location)(0x7962e20)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63747907197, loc:(*time.Location)(0x7962e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63747907192, loc:(*time.Location)(0x7962e20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-668db69979\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 3 00:00:02.097: INFO: all replica sets need to contain the pod-template-hash label Feb 3 00:00:02.097: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63747907192, loc:(*time.Location)(0x7962e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63747907192, loc:(*time.Location)(0x7962e20)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63747907197, loc:(*time.Location)(0x7962e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63747907192, loc:(*time.Location)(0x7962e20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-668db69979\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 3 00:00:04.098: INFO: all replica sets need to contain the pod-template-hash label Feb 3 00:00:04.098: INFO: 
deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63747907192, loc:(*time.Location)(0x7962e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63747907192, loc:(*time.Location)(0x7962e20)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63747907197, loc:(*time.Location)(0x7962e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63747907192, loc:(*time.Location)(0x7962e20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-668db69979\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 3 00:00:06.097: INFO: all replica sets need to contain the pod-template-hash label Feb 3 00:00:06.097: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63747907192, loc:(*time.Location)(0x7962e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63747907192, loc:(*time.Location)(0x7962e20)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63747907197, loc:(*time.Location)(0x7962e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63747907192, loc:(*time.Location)(0x7962e20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-668db69979\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 3 00:00:08.315: INFO: Feb 3 00:00:08.315: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:79 Feb 3 00:00:08.429: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:{test-rollover-deployment deployment-4564 7c16dbaa-7b22-4f0f-a856-eba3ddebef38 4189541 2 2021-02-02 23:59:51 +0000 UTC map[name:rollover-pod] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2021-02-02 23:59:54 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:minReadySeconds":{},"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2021-02-03 00:00:07 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.21 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc006821a08 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2021-02-02 23:59:52 +0000 UTC,LastTransitionTime:2021-02-02 23:59:52 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rollover-deployment-668db69979" has successfully progressed.,LastUpdateTime:2021-02-03 00:00:07 +0000 UTC,LastTransitionTime:2021-02-02 23:59:52 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Feb 3 00:00:08.432: INFO: New ReplicaSet "test-rollover-deployment-668db69979" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:{test-rollover-deployment-668db69979 deployment-4564 98499470-9b65-44d3-a6af-ecc0e0a324be 4189530 2 2021-02-02 23:59:54 +0000 UTC map[name:rollover-pod pod-template-hash:668db69979] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-rollover-deployment 7c16dbaa-7b22-4f0f-a856-eba3ddebef38 0xc006821e77 0xc006821e78}] [] [{kube-controller-manager Update apps/v1 2021-02-03 00:00:07 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7c16dbaa-7b22-4f0f-a856-eba3ddebef38\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:minReadySeconds":{},"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 668db69979,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:668db69979] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.21 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc006821f08 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Feb 3 00:00:08.432: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": Feb 3 00:00:08.432: INFO: &ReplicaSet{ObjectMeta:{test-rollover-controller deployment-4564 ca442d05-7f60-4d2e-950d-56503111d54c 4189540 2 2021-02-02 23:59:44 +0000 UTC map[name:rollover-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2] [{apps/v1 Deployment test-rollover-deployment 7c16dbaa-7b22-4f0f-a856-eba3ddebef38 0xc006821d67 0xc006821d68}] [] [{e2e.test Update apps/v1 2021-02-02 23:59:44 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2021-02-03 00:00:07 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7c16dbaa-7b22-4f0f-a856-eba3ddebef38\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc006821e08 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Feb 3 00:00:08.432: INFO: &ReplicaSet{ObjectMeta:{test-rollover-deployment-78bc8b888c deployment-4564 2db2c199-4a83-4f91-92b7-9f046a30f777 4189497 2 2021-02-02 23:59:51 +0000 UTC map[name:rollover-pod pod-template-hash:78bc8b888c] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-rollover-deployment 7c16dbaa-7b22-4f0f-a856-eba3ddebef38 0xc006821f77 0xc006821f78}] [] [{kube-controller-manager Update apps/v1 2021-02-02 23:59:54 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7c16dbaa-7b22-4f0f-a856-eba3ddebef38\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:minReadySeconds":{},"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"redis-slave\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 78bc8b888c,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:78bc8b888c] map[] [] [] []} {[] [] [{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] 
Always 0xc003e2c008 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Feb 3 00:00:08.435: INFO: Pod "test-rollover-deployment-668db69979-tv7ht" is available: &Pod{ObjectMeta:{test-rollover-deployment-668db69979-tv7ht test-rollover-deployment-668db69979- deployment-4564 b53874c1-1671-425b-a6ef-66b22a1488e0 4189508 0 2021-02-02 23:59:54 +0000 UTC map[name:rollover-pod pod-template-hash:668db69979] map[] [{apps/v1 ReplicaSet test-rollover-deployment-668db69979 98499470-9b65-44d3-a6af-ecc0e0a324be 0xc003e2c507 0xc003e2c508}] [] [{kube-controller-manager Update v1 2021-02-02 23:59:54 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"98499470-9b65-44d3-a6af-ecc0e0a324be\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-02-02 23:59:57 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.151\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-ldxhh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-ldxhh,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:k8s.gcr.io/e2e-test-images/agnhost:2.21,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-ldxhh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEsca
lation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-02 23:59:54 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-02 23:59:57 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-02 23:59:57 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-02 23:59:54 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:10.244.2.151,StartTime:2021-02-02 23:59:54 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-02-02 23:59:57 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/agnhost:2.21,ImageID:k8s.gcr.io/e2e-test-images/agnhost@sha256:ab055cd3d45f50b90732c14593a5bf50f210871bb4f91994c756fc22db6d922a,ContainerID:containerd://cd1ec0c8109235afa970af89b8fcad74e4d883bfef04838820d988e768aa08fc,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.151,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 3 00:00:08.435: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-4564" for this suite. 
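For anyone reproducing the rollover behaviour exercised above by hand, the flow reduces to roughly the following kubectl sequence. This is an illustrative sketch with made-up names and public images, not the exact objects the suite builds programmatically:

kubectl create deployment rollover-demo --image=nginx:1.20
kubectl rollout status deployment/rollover-demo
# Changing the pod template creates a new ReplicaSet; once its pods are ready
# (subject to minReadySeconds) the old ReplicaSet is scaled to 0, which is the "rollover".
kubectl set image deployment/rollover-demo nginx=nginx:1.21
kubectl rollout status deployment/rollover-demo
kubectl get rs -l app=rollover-demo    # the old ReplicaSet should report 0 replicas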
• [SLOW TEST:23.594 seconds] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support rollover [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":309,"completed":213,"skipped":3719,"failed":0} SSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 3 00:00:08.443: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not be blocked by dependency circle [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 Feb 3 00:00:08.598: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"140b6559-b63d-4d77-9f22-0c0bb6342f9e", Controller:(*bool)(0xc002562672), BlockOwnerDeletion:(*bool)(0xc002562673)}} Feb 3 00:00:08.852: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"98578816-b7e2-4534-894f-16f0fda2c29c", Controller:(*bool)(0xc003e2dbf2), BlockOwnerDeletion:(*bool)(0xc003e2dbf3)}} Feb 3 00:00:08.860: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"5831c0cc-f566-4d43-b3fa-01b5f88d82c3", Controller:(*bool)(0xc0025629ba), BlockOwnerDeletion:(*bool)(0xc0025629bb)}} [AfterEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 3 00:00:13.972: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-5355" for this suite. 
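The dependency-circle check above is really about metadata.ownerReferences. A rough manual sketch of wiring one link of such a circle follows; pod names are hypothetical, and the suite sets all three references (pod1->pod3, pod2->pod1, pod3->pod2) before verifying the garbage collector still makes progress:

kubectl run pod1 --image=busybox --restart=Never -- sleep 3600
kubectl run pod3 --image=busybox --restart=Never -- sleep 3600
# Point pod1 at pod3 as its owner; repeating this for the other two pods closes the circle.
UID3=$(kubectl get pod pod3 -o jsonpath='{.metadata.uid}')
kubectl patch pod pod1 --type=merge -p "{\"metadata\":{\"ownerReferences\":[{\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"name\":\"pod3\",\"uid\":\"$UID3\",\"blockOwnerDeletion\":true}]}}"
kubectl get pod pod1 -o jsonpath='{.metadata.ownerReferences}'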
• [SLOW TEST:5.548 seconds] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be blocked by dependency circle [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":309,"completed":214,"skipped":3729,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 3 00:00:13.991: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating configMap with name projected-configmap-test-volume-map-02749ee6-e686-4736-8b32-22692ef0fc30 STEP: Creating a pod to test consume configMaps Feb 3 00:00:14.433: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-3a91192c-85cc-4bd2-a2d8-695829c61c4e" in namespace "projected-4598" to be "Succeeded or Failed" Feb 3 00:00:14.501: INFO: Pod "pod-projected-configmaps-3a91192c-85cc-4bd2-a2d8-695829c61c4e": Phase="Pending", Reason="", readiness=false. Elapsed: 67.580086ms Feb 3 00:00:16.506: INFO: Pod "pod-projected-configmaps-3a91192c-85cc-4bd2-a2d8-695829c61c4e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.072345786s Feb 3 00:00:18.539: INFO: Pod "pod-projected-configmaps-3a91192c-85cc-4bd2-a2d8-695829c61c4e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.105410163s STEP: Saw pod success Feb 3 00:00:18.539: INFO: Pod "pod-projected-configmaps-3a91192c-85cc-4bd2-a2d8-695829c61c4e" satisfied condition "Succeeded or Failed" Feb 3 00:00:18.542: INFO: Trying to get logs from node leguer-worker2 pod pod-projected-configmaps-3a91192c-85cc-4bd2-a2d8-695829c61c4e container agnhost-container: STEP: delete the pod Feb 3 00:00:18.593: INFO: Waiting for pod pod-projected-configmaps-3a91192c-85cc-4bd2-a2d8-695829c61c4e to disappear Feb 3 00:00:18.607: INFO: Pod pod-projected-configmaps-3a91192c-85cc-4bd2-a2d8-695829c61c4e no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 3 00:00:18.607: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4598" for this suite. 
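A hand-written pod along the following lines exercises the same mapping-plus-item-mode path as the projected configMap spec above; the configMap name, keys and paths are illustrative, not the ones the suite generates:

kubectl create configmap mapped-config --from-literal=data-1=value-1
kubectl create -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: projected-configmap-demo
spec:
  restartPolicy: Never
  containers:
  - name: reader
    image: busybox
    command: ["cat", "/etc/projected/mapped/data-1"]
    volumeMounts:
    - name: cfg
      mountPath: /etc/projected
  volumes:
  - name: cfg
    projected:
      sources:
      - configMap:
          name: mapped-config
          items:
          - key: data-1
            path: mapped/data-1
            mode: 0400      # per-item octal file mode, the "Item mode" in the spec name
EOF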
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":309,"completed":215,"skipped":3745,"failed":0} ------------------------------ [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 3 00:00:18.615: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:53 [It] should have monotonically increasing restart count [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating pod liveness-24651626-a48b-4729-bba5-164001394b39 in namespace container-probe-5016 Feb 3 00:00:22.934: INFO: Started pod liveness-24651626-a48b-4729-bba5-164001394b39 in namespace container-probe-5016 STEP: checking the pod's current state and verifying that restartCount is present Feb 3 00:00:22.938: INFO: Initial restart count of pod liveness-24651626-a48b-4729-bba5-164001394b39 is 0 Feb 3 00:00:42.986: INFO: Restart count of pod container-probe-5016/liveness-24651626-a48b-4729-bba5-164001394b39 is now 1 (20.04871554s elapsed) Feb 3 00:01:03.029: INFO: Restart count of pod container-probe-5016/liveness-24651626-a48b-4729-bba5-164001394b39 is now 2 (40.091201281s elapsed) Feb 3 00:01:23.117: INFO: Restart count of pod container-probe-5016/liveness-24651626-a48b-4729-bba5-164001394b39 is now 3 (1m0.179314846s elapsed) Feb 3 00:01:43.237: INFO: Restart count of pod container-probe-5016/liveness-24651626-a48b-4729-bba5-164001394b39 is now 4 (1m20.299506193s elapsed) Feb 3 00:02:55.491: INFO: Restart count of pod container-probe-5016/liveness-24651626-a48b-4729-bba5-164001394b39 is now 5 (2m32.553254056s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 3 00:02:55.548: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-5016" for this suite. 
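The restart-count spec above relies on a pod whose liveness probe starts failing shortly after startup. A minimal stand-in in the style of the standard docs example (not the suite's exact pod):

kubectl create -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: liveness-demo
spec:
  containers:
  - name: liveness
    image: busybox
    command: ["/bin/sh", "-c", "touch /tmp/healthy; sleep 10; rm -f /tmp/healthy; sleep 600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/healthy"]
      initialDelaySeconds: 5
      periodSeconds: 5
EOF
# Once /tmp/healthy disappears the probe fails, the kubelet restarts the container, and
# status.containerStatuses[0].restartCount only ever increases, which is what the spec asserts.
kubectl get pod liveness-demo -w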
• [SLOW TEST:156.946 seconds] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 should have monotonically increasing restart count [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":309,"completed":216,"skipped":3745,"failed":0} SSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 3 00:02:55.562: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247 [It] should create services for rc [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: creating Agnhost RC Feb 3 00:02:55.633: INFO: namespace kubectl-9347 Feb 3 00:02:55.634: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-9347 create -f -' Feb 3 00:02:56.201: INFO: stderr: "" Feb 3 00:02:56.201: INFO: stdout: "replicationcontroller/agnhost-primary created\n" STEP: Waiting for Agnhost primary to start. Feb 3 00:02:57.206: INFO: Selector matched 1 pods for map[app:agnhost] Feb 3 00:02:57.206: INFO: Found 0 / 1 Feb 3 00:02:58.206: INFO: Selector matched 1 pods for map[app:agnhost] Feb 3 00:02:58.206: INFO: Found 0 / 1 Feb 3 00:02:59.206: INFO: Selector matched 1 pods for map[app:agnhost] Feb 3 00:02:59.206: INFO: Found 1 / 1 Feb 3 00:02:59.206: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Feb 3 00:02:59.209: INFO: Selector matched 1 pods for map[app:agnhost] Feb 3 00:02:59.209: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Feb 3 00:02:59.209: INFO: wait on agnhost-primary startup in kubectl-9347 Feb 3 00:02:59.209: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-9347 logs agnhost-primary-2swhw agnhost-primary' Feb 3 00:02:59.333: INFO: stderr: "" Feb 3 00:02:59.333: INFO: stdout: "Paused\n" STEP: exposing RC Feb 3 00:02:59.334: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-9347 expose rc agnhost-primary --name=rm2 --port=1234 --target-port=6379' Feb 3 00:02:59.479: INFO: stderr: "" Feb 3 00:02:59.479: INFO: stdout: "service/rm2 exposed\n" Feb 3 00:02:59.484: INFO: Service rm2 in namespace kubectl-9347 found. 
STEP: exposing service Feb 3 00:03:01.492: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-9347 expose service rm2 --name=rm3 --port=2345 --target-port=6379' Feb 3 00:03:01.643: INFO: stderr: "" Feb 3 00:03:01.643: INFO: stdout: "service/rm3 exposed\n" Feb 3 00:03:01.710: INFO: Service rm3 in namespace kubectl-9347 found. [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 3 00:03:03.718: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9347" for this suite. • [SLOW TEST:8.166 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl expose /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1229 should create services for rc [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance]","total":309,"completed":217,"skipped":3755,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] PodTemplates should delete a collection of pod templates [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-node] PodTemplates /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 3 00:03:03.729: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename podtemplate STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a collection of pod templates [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Create set of pod templates Feb 3 00:03:03.838: INFO: created test-podtemplate-1 Feb 3 00:03:03.844: INFO: created test-podtemplate-2 Feb 3 00:03:03.850: INFO: created test-podtemplate-3 STEP: get a list of pod templates with a label in the current namespace STEP: delete collection of pod templates Feb 3 00:03:03.872: INFO: requesting DeleteCollection of pod templates STEP: check that the list of pod templates matches the requested quantity Feb 3 00:03:03.917: INFO: requesting list of pod templates to confirm quantity [AfterEach] [sig-node] PodTemplates /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 3 00:03:03.927: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "podtemplate-9384" for this suite. 
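The PodTemplates spec above is a straight list/delete-collection exercise; by hand it looks roughly like this (hypothetical names and labels):

kubectl create -f - <<EOF
apiVersion: v1
kind: PodTemplate
metadata:
  name: demo-template-1
  labels:
    podtemplate-set: demo
template:
  metadata:
    labels:
      app: demo
  spec:
    containers:
    - name: sleeper
      image: busybox
      command: ["sleep", "3600"]
EOF
kubectl get podtemplates -l podtemplate-set=demo
# Deleting by label selector corresponds to the DeleteCollection call the spec issues.
kubectl delete podtemplates -l podtemplate-set=demo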
•{"msg":"PASSED [sig-node] PodTemplates should delete a collection of pod templates [Conformance]","total":309,"completed":218,"skipped":3814,"failed":0} SSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 3 00:03:03.934: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating configMap with name projected-configmap-test-volume-cc580599-26d2-4bdb-8dbe-6292aea3d1fe STEP: Creating a pod to test consume configMaps Feb 3 00:03:04.049: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-cd3e0874-04c6-4418-be93-e9db692842ff" in namespace "projected-6570" to be "Succeeded or Failed" Feb 3 00:03:04.070: INFO: Pod "pod-projected-configmaps-cd3e0874-04c6-4418-be93-e9db692842ff": Phase="Pending", Reason="", readiness=false. Elapsed: 20.887583ms Feb 3 00:03:06.074: INFO: Pod "pod-projected-configmaps-cd3e0874-04c6-4418-be93-e9db692842ff": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024979846s Feb 3 00:03:08.085: INFO: Pod "pod-projected-configmaps-cd3e0874-04c6-4418-be93-e9db692842ff": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.036732373s STEP: Saw pod success Feb 3 00:03:08.085: INFO: Pod "pod-projected-configmaps-cd3e0874-04c6-4418-be93-e9db692842ff" satisfied condition "Succeeded or Failed" Feb 3 00:03:08.088: INFO: Trying to get logs from node leguer-worker2 pod pod-projected-configmaps-cd3e0874-04c6-4418-be93-e9db692842ff container projected-configmap-volume-test: STEP: delete the pod Feb 3 00:03:08.274: INFO: Waiting for pod pod-projected-configmaps-cd3e0874-04c6-4418-be93-e9db692842ff to disappear Feb 3 00:03:08.303: INFO: Pod pod-projected-configmaps-cd3e0874-04c6-4418-be93-e9db692842ff no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 3 00:03:08.303: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6570" for this suite. 
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":309,"completed":219,"skipped":3822,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 3 00:03:08.311: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41 [It] should provide podname only [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a pod to test downward API volume plugin Feb 3 00:03:08.491: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8caed32e-b6a9-40a5-9114-76f658256648" in namespace "downward-api-7624" to be "Succeeded or Failed" Feb 3 00:03:08.514: INFO: Pod "downwardapi-volume-8caed32e-b6a9-40a5-9114-76f658256648": Phase="Pending", Reason="", readiness=false. Elapsed: 23.531553ms Feb 3 00:03:10.576: INFO: Pod "downwardapi-volume-8caed32e-b6a9-40a5-9114-76f658256648": Phase="Pending", Reason="", readiness=false. Elapsed: 2.085905181s Feb 3 00:03:12.582: INFO: Pod "downwardapi-volume-8caed32e-b6a9-40a5-9114-76f658256648": Phase="Running", Reason="", readiness=true. Elapsed: 4.091130042s Feb 3 00:03:14.587: INFO: Pod "downwardapi-volume-8caed32e-b6a9-40a5-9114-76f658256648": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.096702782s STEP: Saw pod success Feb 3 00:03:14.587: INFO: Pod "downwardapi-volume-8caed32e-b6a9-40a5-9114-76f658256648" satisfied condition "Succeeded or Failed" Feb 3 00:03:14.590: INFO: Trying to get logs from node leguer-worker pod downwardapi-volume-8caed32e-b6a9-40a5-9114-76f658256648 container client-container: STEP: delete the pod Feb 3 00:03:14.635: INFO: Waiting for pod downwardapi-volume-8caed32e-b6a9-40a5-9114-76f658256648 to disappear Feb 3 00:03:14.659: INFO: Pod downwardapi-volume-8caed32e-b6a9-40a5-9114-76f658256648 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 3 00:03:14.659: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7624" for this suite. 
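The "podname only" check above amounts to projecting metadata.name into a file and reading it back; a minimal hand-rolled version with illustrative names:

kubectl create -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: downward-podname-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["cat", "/etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name
EOF
kubectl logs downward-podname-demo    # should print: downward-podname-demo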
• [SLOW TEST:6.374 seconds] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:36 should provide podname only [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":309,"completed":220,"skipped":3842,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should run through a ConfigMap lifecycle [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 3 00:03:14.686: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should run through a ConfigMap lifecycle [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: creating a ConfigMap STEP: fetching the ConfigMap STEP: patching the ConfigMap STEP: listing all ConfigMaps in all namespaces with a label selector STEP: deleting the ConfigMap by collection with a label selector STEP: listing all ConfigMaps in test namespace [AfterEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 3 00:03:14.820: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4566" for this suite. •{"msg":"PASSED [sig-node] ConfigMap should run through a ConfigMap lifecycle [Conformance]","total":309,"completed":221,"skipped":3873,"failed":0} SSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 3 00:03:14.827: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a pod to test emptydir 0666 on node default medium Feb 3 00:03:14.939: INFO: Waiting up to 5m0s for pod "pod-7220fa92-2f81-41b7-859e-b7aa8bbd8cec" in namespace "emptydir-3238" to be "Succeeded or Failed" Feb 3 00:03:14.963: INFO: Pod "pod-7220fa92-2f81-41b7-859e-b7aa8bbd8cec": Phase="Pending", Reason="", readiness=false. Elapsed: 24.434309ms Feb 3 00:03:16.968: INFO: Pod "pod-7220fa92-2f81-41b7-859e-b7aa8bbd8cec": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029133594s Feb 3 00:03:19.014: INFO: Pod "pod-7220fa92-2f81-41b7-859e-b7aa8bbd8cec": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.074954271s STEP: Saw pod success Feb 3 00:03:19.014: INFO: Pod "pod-7220fa92-2f81-41b7-859e-b7aa8bbd8cec" satisfied condition "Succeeded or Failed" Feb 3 00:03:19.017: INFO: Trying to get logs from node leguer-worker2 pod pod-7220fa92-2f81-41b7-859e-b7aa8bbd8cec container test-container: STEP: delete the pod Feb 3 00:03:19.036: INFO: Waiting for pod pod-7220fa92-2f81-41b7-859e-b7aa8bbd8cec to disappear Feb 3 00:03:19.039: INFO: Pod pod-7220fa92-2f81-41b7-859e-b7aa8bbd8cec no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 3 00:03:19.039: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3238" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":309,"completed":222,"skipped":3881,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 3 00:03:19.053: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating projection with configMap that has name projected-configmap-test-upd-391dab28-dd59-4337-b31b-39da1ef223ed STEP: Creating the pod STEP: Updating configmap projected-configmap-test-upd-391dab28-dd59-4337-b31b-39da1ef223ed STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 3 00:04:31.653: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8235" for this suite. 
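The "updates should be reflected in volume" spec above depends on the kubelet re-syncing configMap-backed volumes, and that propagation delay is why the spec runs for over a minute. A hand-run approximation with made-up names:

kubectl create configmap live-config --from-literal=data-1=value-1
kubectl create -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: config-watch
spec:
  containers:
  - name: watcher
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - name: cfg
      mountPath: /etc/config
  volumes:
  - name: cfg
    configMap:
      name: live-config
EOF
kubectl exec config-watch -- cat /etc/config/data-1      # value-1
kubectl patch configmap live-config -p '{"data":{"data-1":"value-2"}}'
# After the kubelet's next volume sync (typically under a minute) the same file changes in place:
kubectl exec config-watch -- cat /etc/config/data-1      # value-2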
• [SLOW TEST:72.612 seconds] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":309,"completed":223,"skipped":3895,"failed":0} [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 3 00:04:31.665: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41 [It] should provide container's memory request [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a pod to test downward API volume plugin Feb 3 00:04:31.822: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0fa1199c-b634-4856-86d9-7fb12e782fd5" in namespace "projected-2906" to be "Succeeded or Failed" Feb 3 00:04:31.825: INFO: Pod "downwardapi-volume-0fa1199c-b634-4856-86d9-7fb12e782fd5": Phase="Pending", Reason="", readiness=false. Elapsed: 3.356868ms Feb 3 00:04:33.830: INFO: Pod "downwardapi-volume-0fa1199c-b634-4856-86d9-7fb12e782fd5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008413713s Feb 3 00:04:35.836: INFO: Pod "downwardapi-volume-0fa1199c-b634-4856-86d9-7fb12e782fd5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013703936s STEP: Saw pod success Feb 3 00:04:35.836: INFO: Pod "downwardapi-volume-0fa1199c-b634-4856-86d9-7fb12e782fd5" satisfied condition "Succeeded or Failed" Feb 3 00:04:35.839: INFO: Trying to get logs from node leguer-worker2 pod downwardapi-volume-0fa1199c-b634-4856-86d9-7fb12e782fd5 container client-container: STEP: delete the pod Feb 3 00:04:35.873: INFO: Waiting for pod downwardapi-volume-0fa1199c-b634-4856-86d9-7fb12e782fd5 to disappear Feb 3 00:04:35.882: INFO: Pod downwardapi-volume-0fa1199c-b634-4856-86d9-7fb12e782fd5 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 3 00:04:35.882: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2906" for this suite. 
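The memory-request spec above uses a projected downwardAPI volume with a resourceFieldRef; a minimal equivalent where names and sizes are illustrative:

kubectl create -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: downward-mem-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["cat", "/etc/podinfo/mem_request"]
    resources:
      requests:
        memory: 32Mi
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: mem_request
            resourceFieldRef:
              containerName: client-container
              resource: requests.memory
              divisor: 1Mi
EOF
kubectl logs downward-mem-demo    # prints 32, the memory request expressed in Mi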
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":309,"completed":224,"skipped":3895,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] IngressClass API should support creating IngressClass API operations [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-network] IngressClass API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 3 00:04:35.891: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename ingressclass STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] IngressClass API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/ingressclass.go:148 [It] should support creating IngressClass API operations [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: getting /apis STEP: getting /apis/networking.k8s.io STEP: getting /apis/networking.k8s.iov1 STEP: creating STEP: getting STEP: listing STEP: watching Feb 3 00:04:36.018: INFO: starting watch STEP: patching STEP: updating Feb 3 00:04:36.032: INFO: waiting for watch events with expected annotations Feb 3 00:04:36.032: INFO: saw patched and updated annotations STEP: deleting STEP: deleting a collection [AfterEach] [sig-network] IngressClass API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 3 00:04:36.091: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "ingressclass-446" for this suite. •{"msg":"PASSED [sig-network] IngressClass API should support creating IngressClass API operations [Conformance]","total":309,"completed":225,"skipped":3938,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 3 00:04:36.100: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a pod to test downward api env vars Feb 3 00:04:36.151: INFO: Waiting up to 5m0s for pod "downward-api-055ccfd8-99e9-4b81-873c-054c949f9b13" in namespace "downward-api-6023" to be "Succeeded or Failed" Feb 3 00:04:36.206: INFO: Pod "downward-api-055ccfd8-99e9-4b81-873c-054c949f9b13": Phase="Pending", Reason="", readiness=false. Elapsed: 54.484621ms Feb 3 00:04:38.214: INFO: Pod "downward-api-055ccfd8-99e9-4b81-873c-054c949f9b13": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.063102298s Feb 3 00:04:40.219: INFO: Pod "downward-api-055ccfd8-99e9-4b81-873c-054c949f9b13": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.068078265s STEP: Saw pod success Feb 3 00:04:40.220: INFO: Pod "downward-api-055ccfd8-99e9-4b81-873c-054c949f9b13" satisfied condition "Succeeded or Failed" Feb 3 00:04:40.223: INFO: Trying to get logs from node leguer-worker2 pod downward-api-055ccfd8-99e9-4b81-873c-054c949f9b13 container dapi-container: STEP: delete the pod Feb 3 00:04:40.290: INFO: Waiting for pod downward-api-055ccfd8-99e9-4b81-873c-054c949f9b13 to disappear Feb 3 00:04:40.294: INFO: Pod downward-api-055ccfd8-99e9-4b81-873c-054c949f9b13 no longer exists [AfterEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 3 00:04:40.294: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6023" for this suite. •{"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":309,"completed":226,"skipped":3957,"failed":0} SSSSSSS ------------------------------ [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 3 00:04:40.302: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating configMap with name cm-test-opt-del-403c667d-2ce2-4c75-b520-2ea81197b763 STEP: Creating configMap with name cm-test-opt-upd-f653851b-4aab-4bc8-af6d-5c34be8ad07b STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-403c667d-2ce2-4c75-b520-2ea81197b763 STEP: Updating configmap cm-test-opt-upd-f653851b-4aab-4bc8-af6d-5c34be8ad07b STEP: Creating configMap with name cm-test-opt-create-2aafa811-09e6-44c5-a291-4686b553f5e5 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 3 00:04:50.896: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-7624" for this suite. 
• [SLOW TEST:10.603 seconds] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":309,"completed":227,"skipped":3964,"failed":0} SSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 3 00:04:50.905: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with downward pod [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating pod pod-subpath-test-downwardapi-wz6s STEP: Creating a pod to test atomic-volume-subpath Feb 3 00:04:51.004: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-wz6s" in namespace "subpath-8486" to be "Succeeded or Failed" Feb 3 00:04:51.035: INFO: Pod "pod-subpath-test-downwardapi-wz6s": Phase="Pending", Reason="", readiness=false. Elapsed: 30.644847ms Feb 3 00:04:53.039: INFO: Pod "pod-subpath-test-downwardapi-wz6s": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034930691s Feb 3 00:04:55.043: INFO: Pod "pod-subpath-test-downwardapi-wz6s": Phase="Running", Reason="", readiness=true. Elapsed: 4.038971924s Feb 3 00:04:57.047: INFO: Pod "pod-subpath-test-downwardapi-wz6s": Phase="Running", Reason="", readiness=true. Elapsed: 6.043133108s Feb 3 00:04:59.066: INFO: Pod "pod-subpath-test-downwardapi-wz6s": Phase="Running", Reason="", readiness=true. Elapsed: 8.061876031s Feb 3 00:05:01.146: INFO: Pod "pod-subpath-test-downwardapi-wz6s": Phase="Running", Reason="", readiness=true. Elapsed: 10.142020025s Feb 3 00:05:03.150: INFO: Pod "pod-subpath-test-downwardapi-wz6s": Phase="Running", Reason="", readiness=true. Elapsed: 12.145816329s Feb 3 00:05:05.154: INFO: Pod "pod-subpath-test-downwardapi-wz6s": Phase="Running", Reason="", readiness=true. Elapsed: 14.149672215s Feb 3 00:05:07.159: INFO: Pod "pod-subpath-test-downwardapi-wz6s": Phase="Running", Reason="", readiness=true. Elapsed: 16.154882185s Feb 3 00:05:09.162: INFO: Pod "pod-subpath-test-downwardapi-wz6s": Phase="Running", Reason="", readiness=true. Elapsed: 18.158170698s Feb 3 00:05:11.166: INFO: Pod "pod-subpath-test-downwardapi-wz6s": Phase="Running", Reason="", readiness=true. Elapsed: 20.161512542s Feb 3 00:05:13.171: INFO: Pod "pod-subpath-test-downwardapi-wz6s": Phase="Running", Reason="", readiness=true. Elapsed: 22.167023455s Feb 3 00:05:15.177: INFO: Pod "pod-subpath-test-downwardapi-wz6s": Phase="Running", Reason="", readiness=true. 
Elapsed: 24.172467447s Feb 3 00:05:17.182: INFO: Pod "pod-subpath-test-downwardapi-wz6s": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.177422598s STEP: Saw pod success Feb 3 00:05:17.182: INFO: Pod "pod-subpath-test-downwardapi-wz6s" satisfied condition "Succeeded or Failed" Feb 3 00:05:17.185: INFO: Trying to get logs from node leguer-worker2 pod pod-subpath-test-downwardapi-wz6s container test-container-subpath-downwardapi-wz6s: STEP: delete the pod Feb 3 00:05:17.214: INFO: Waiting for pod pod-subpath-test-downwardapi-wz6s to disappear Feb 3 00:05:17.216: INFO: Pod pod-subpath-test-downwardapi-wz6s no longer exists STEP: Deleting pod pod-subpath-test-downwardapi-wz6s Feb 3 00:05:17.216: INFO: Deleting pod "pod-subpath-test-downwardapi-wz6s" in namespace "subpath-8486" [AfterEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 3 00:05:17.219: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-8486" for this suite. • [SLOW TEST:26.337 seconds] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with downward pod [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":309,"completed":228,"skipped":3971,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 3 00:05:17.243: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a pod to test emptydir 0666 on node default medium Feb 3 00:05:17.301: INFO: Waiting up to 5m0s for pod "pod-f7817b12-9bb9-4e42-bcc6-9204170d3f33" in namespace "emptydir-7242" to be "Succeeded or Failed" Feb 3 00:05:17.326: INFO: Pod "pod-f7817b12-9bb9-4e42-bcc6-9204170d3f33": Phase="Pending", Reason="", readiness=false. Elapsed: 25.174218ms Feb 3 00:05:19.331: INFO: Pod "pod-f7817b12-9bb9-4e42-bcc6-9204170d3f33": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030411942s Feb 3 00:05:21.335: INFO: Pod "pod-f7817b12-9bb9-4e42-bcc6-9204170d3f33": Phase="Running", Reason="", readiness=true. Elapsed: 4.034514352s Feb 3 00:05:23.340: INFO: Pod "pod-f7817b12-9bb9-4e42-bcc6-9204170d3f33": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.039700647s STEP: Saw pod success Feb 3 00:05:23.340: INFO: Pod "pod-f7817b12-9bb9-4e42-bcc6-9204170d3f33" satisfied condition "Succeeded or Failed" Feb 3 00:05:23.343: INFO: Trying to get logs from node leguer-worker pod pod-f7817b12-9bb9-4e42-bcc6-9204170d3f33 container test-container: STEP: delete the pod Feb 3 00:05:23.383: INFO: Waiting for pod pod-f7817b12-9bb9-4e42-bcc6-9204170d3f33 to disappear Feb 3 00:05:23.393: INFO: Pod pod-f7817b12-9bb9-4e42-bcc6-9204170d3f33 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 3 00:05:23.393: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7242" for this suite. • [SLOW TEST:6.178 seconds] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:45 should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":309,"completed":229,"skipped":3983,"failed":0} SS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 3 00:05:23.421: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a service. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Service STEP: Ensuring resource quota status captures service creation STEP: Deleting a Service STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 3 00:05:34.575: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-5788" for this suite. • [SLOW TEST:11.162 seconds] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a service. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. 
[Conformance]","total":309,"completed":230,"skipped":3985,"failed":0} [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] Projected combined /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 3 00:05:34.583: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating configMap with name configmap-projected-all-test-volume-15d6fe85-4b55-4274-b519-f931607910b4 STEP: Creating secret with name secret-projected-all-test-volume-fd19c266-5dd5-477b-b2ea-961a37f0768a STEP: Creating a pod to test Check all projections for projected volume plugin Feb 3 00:05:34.739: INFO: Waiting up to 5m0s for pod "projected-volume-f934b103-140f-4f55-adec-ac20f248037e" in namespace "projected-8988" to be "Succeeded or Failed" Feb 3 00:05:34.779: INFO: Pod "projected-volume-f934b103-140f-4f55-adec-ac20f248037e": Phase="Pending", Reason="", readiness=false. Elapsed: 39.848025ms Feb 3 00:05:36.783: INFO: Pod "projected-volume-f934b103-140f-4f55-adec-ac20f248037e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.043873078s Feb 3 00:05:38.787: INFO: Pod "projected-volume-f934b103-140f-4f55-adec-ac20f248037e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.047854379s STEP: Saw pod success Feb 3 00:05:38.787: INFO: Pod "projected-volume-f934b103-140f-4f55-adec-ac20f248037e" satisfied condition "Succeeded or Failed" Feb 3 00:05:38.790: INFO: Trying to get logs from node leguer-worker pod projected-volume-f934b103-140f-4f55-adec-ac20f248037e container projected-all-volume-test: STEP: delete the pod Feb 3 00:05:38.816: INFO: Waiting for pod projected-volume-f934b103-140f-4f55-adec-ac20f248037e to disappear Feb 3 00:05:38.895: INFO: Pod projected-volume-f934b103-140f-4f55-adec-ac20f248037e no longer exists [AfterEach] [sig-storage] Projected combined /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 3 00:05:38.895: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8988" for this suite. 
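The "all components" spec above folds a configMap, a secret and the downward API into a single projected volume; a compact hand-written version (all names illustrative):

kubectl create configmap demo-config --from-literal=cm-key=cm-value
kubectl create secret generic demo-secret --from-literal=secret-key=secret-value
kubectl create -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: projected-all-demo
spec:
  restartPolicy: Never
  containers:
  - name: reader
    image: busybox
    command: ["/bin/sh", "-c", "ls -R /all-volumes && cat /all-volumes/podname"]
    volumeMounts:
    - name: all-in-one
      mountPath: /all-volumes
  volumes:
  - name: all-in-one
    projected:
      sources:
      - configMap:
          name: demo-config
      - secret:
          name: demo-secret
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name
EOF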
•{"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":309,"completed":231,"skipped":3985,"failed":0} ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [k8s.io] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 3 00:05:38.909: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:52 STEP: create the container to handle the HTTPGet hook request. [It] should execute prestop http hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Feb 3 00:05:47.006: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Feb 3 00:05:47.029: INFO: Pod pod-with-prestop-http-hook still exists Feb 3 00:05:49.029: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Feb 3 00:05:49.034: INFO: Pod pod-with-prestop-http-hook still exists Feb 3 00:05:51.029: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Feb 3 00:05:51.034: INFO: Pod pod-with-prestop-http-hook still exists Feb 3 00:05:53.029: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Feb 3 00:05:53.034: INFO: Pod pod-with-prestop-http-hook still exists Feb 3 00:05:55.029: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Feb 3 00:05:55.034: INFO: Pod pod-with-prestop-http-hook still exists Feb 3 00:05:57.029: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Feb 3 00:05:57.034: INFO: Pod pod-with-prestop-http-hook still exists Feb 3 00:05:59.029: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Feb 3 00:05:59.033: INFO: Pod pod-with-prestop-http-hook still exists Feb 3 00:06:01.029: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Feb 3 00:06:01.034: INFO: Pod pod-with-prestop-http-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 3 00:06:01.041: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-5833" for this suite. 
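The preStop spec above registers an HTTP GET lifecycle hook and checks that it fires during deletion. A simplified stand-in points the hook at the container's own web server; the suite instead targets a dedicated handler pod and inspects whether the request arrived:

kubectl create -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: prestop-http-demo
spec:
  containers:
  - name: web
    image: nginx
    ports:
    - containerPort: 80
    lifecycle:
      preStop:
        httpGet:
          path: /
          port: 80
EOF
# On deletion the kubelet issues the GET before sending SIGTERM, then waits for the hook
# (bounded by terminationGracePeriodSeconds) before killing the container.
kubectl delete pod prestop-http-demo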
• [SLOW TEST:22.143 seconds] [k8s.io] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 when create a pod with lifecycle hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:43 should execute prestop http hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":309,"completed":232,"skipped":3985,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 3 00:06:01.052: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a pod. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Pod that fits quota STEP: Ensuring ResourceQuota status captures the pod usage STEP: Not allowing a pod to be created that exceeds remaining quota STEP: Not allowing a pod to be created that exceeds remaining quota(validation on extended resources) STEP: Ensuring a pod cannot update its resource requirements STEP: Ensuring attempts to update pod resource requirements did not change quota usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 3 00:06:14.279: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-4316" for this suite. • [SLOW TEST:13.236 seconds] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a pod. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. 
[Conformance]","total":309,"completed":233,"skipped":3997,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 3 00:06:14.288: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should patch a Namespace [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: creating a Namespace STEP: patching the Namespace STEP: get the Namespace and ensuring it has the label [AfterEach] [sig-api-machinery] Namespaces [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 3 00:06:14.477: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-2780" for this suite. STEP: Destroying namespace "nspatchtest-5370fdb4-ef65-48f7-a3f0-8bf4266ddf12-8865" for this suite. •{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance]","total":309,"completed":234,"skipped":4030,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [k8s.io] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 3 00:06:14.497: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: create the container STEP: wait for the container to reach Failed STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Feb 3 00:06:17.628: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 3 00:06:17.652: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-3817" for this suite. 
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":309,"completed":235,"skipped":4094,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 3 00:06:17.659: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating secret with name projected-secret-test-19cc10d3-d4c9-4786-b460-de86fd3d9f51 STEP: Creating a pod to test consume secrets Feb 3 00:06:17.844: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-289e417b-eaae-4bd4-b048-d72ccdaeea92" in namespace "projected-5417" to be "Succeeded or Failed" Feb 3 00:06:18.008: INFO: Pod "pod-projected-secrets-289e417b-eaae-4bd4-b048-d72ccdaeea92": Phase="Pending", Reason="", readiness=false. Elapsed: 164.14929ms Feb 3 00:06:20.044: INFO: Pod "pod-projected-secrets-289e417b-eaae-4bd4-b048-d72ccdaeea92": Phase="Pending", Reason="", readiness=false. Elapsed: 2.200343613s Feb 3 00:06:22.048: INFO: Pod "pod-projected-secrets-289e417b-eaae-4bd4-b048-d72ccdaeea92": Phase="Running", Reason="", readiness=true. Elapsed: 4.204048982s Feb 3 00:06:24.052: INFO: Pod "pod-projected-secrets-289e417b-eaae-4bd4-b048-d72ccdaeea92": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.208481579s STEP: Saw pod success Feb 3 00:06:24.052: INFO: Pod "pod-projected-secrets-289e417b-eaae-4bd4-b048-d72ccdaeea92" satisfied condition "Succeeded or Failed" Feb 3 00:06:24.055: INFO: Trying to get logs from node leguer-worker pod pod-projected-secrets-289e417b-eaae-4bd4-b048-d72ccdaeea92 container secret-volume-test: STEP: delete the pod Feb 3 00:06:24.070: INFO: Waiting for pod pod-projected-secrets-289e417b-eaae-4bd4-b048-d72ccdaeea92 to disappear Feb 3 00:06:24.087: INFO: Pod pod-projected-secrets-289e417b-eaae-4bd4-b048-d72ccdaeea92 no longer exists [AfterEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 3 00:06:24.087: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5417" for this suite. 
• [SLOW TEST:6.436 seconds] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":309,"completed":236,"skipped":4163,"failed":0} [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 3 00:06:24.095: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating configMap with name projected-configmap-test-volume-5d7e4efe-c896-41e4-8e51-05295f0e6a31 STEP: Creating a pod to test consume configMaps Feb 3 00:06:24.196: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-1c7bd4b7-1e82-488a-bde7-0e7cbe14f2d0" in namespace "projected-5702" to be "Succeeded or Failed" Feb 3 00:06:24.208: INFO: Pod "pod-projected-configmaps-1c7bd4b7-1e82-488a-bde7-0e7cbe14f2d0": Phase="Pending", Reason="", readiness=false. Elapsed: 12.223249ms Feb 3 00:06:26.255: INFO: Pod "pod-projected-configmaps-1c7bd4b7-1e82-488a-bde7-0e7cbe14f2d0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.058468221s Feb 3 00:06:28.259: INFO: Pod "pod-projected-configmaps-1c7bd4b7-1e82-488a-bde7-0e7cbe14f2d0": Phase="Running", Reason="", readiness=true. Elapsed: 4.062407206s Feb 3 00:06:30.263: INFO: Pod "pod-projected-configmaps-1c7bd4b7-1e82-488a-bde7-0e7cbe14f2d0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.066602855s STEP: Saw pod success Feb 3 00:06:30.263: INFO: Pod "pod-projected-configmaps-1c7bd4b7-1e82-488a-bde7-0e7cbe14f2d0" satisfied condition "Succeeded or Failed" Feb 3 00:06:30.265: INFO: Trying to get logs from node leguer-worker pod pod-projected-configmaps-1c7bd4b7-1e82-488a-bde7-0e7cbe14f2d0 container agnhost-container: STEP: delete the pod Feb 3 00:06:30.321: INFO: Waiting for pod pod-projected-configmaps-1c7bd4b7-1e82-488a-bde7-0e7cbe14f2d0 to disappear Feb 3 00:06:30.323: INFO: Pod pod-projected-configmaps-1c7bd4b7-1e82-488a-bde7-0e7cbe14f2d0 no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 3 00:06:30.323: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5702" for this suite. 
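The "as non-root" variant above runs the consuming container under a non-root UID while reading a configMap through a projected volume. The sketch below shows one way to express that with client-go; the UID, image, command, and all names are assumptions for illustration (the e2e test itself uses the agnhost image).

package e2esketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// nonRootProjectedConfigMapPod builds a pod that mounts configMapName via a
// projected volume and runs its container as a non-root user (UID 1000 is an
// illustrative choice), mirroring the "consumable as non-root" test above.
func nonRootProjectedConfigMapPod(ns, configMapName string) *corev1.Pod {
	uid := int64(1000)
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-configmaps-nonroot", Namespace: ns},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "projected-configmap-volume",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							ConfigMap: &corev1.ConfigMapProjection{
								LocalObjectReference: corev1.LocalObjectReference{Name: configMapName},
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:            "agnhost-container",
				Image:           "busybox:1.33", // illustrative; the e2e test uses agnhost
				Command:         []string{"/bin/sh", "-c", "cat /etc/projected-configmap-volume/*"},
				SecurityContext: &corev1.SecurityContext{RunAsUser: &uid},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "projected-configmap-volume",
					MountPath: "/etc/projected-configmap-volume",
				}},
			}},
		},
	}
}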
• [SLOW TEST:6.237 seconds] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":309,"completed":237,"skipped":4163,"failed":0} SS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [k8s.io] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 3 00:06:30.332: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [BeforeEach] when scheduling a busybox command that always fails in a pod /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:82 [It] should have an terminated reason [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [AfterEach] [k8s.io] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 3 00:06:34.479: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-3177" for this suite. 
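The Kubelet test above schedules a busybox pod whose command always fails and then asserts that the container status reports a terminated state with a reason (typically "Error"). A small sketch of that status check, assuming the pod already exists and a clientset has been built elsewhere; the function name is illustrative.

package e2esketch

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// terminatedReason fetches a pod and returns the Terminated.Reason of its first
// container (e.g. "Error" for a command that always fails), or an error if the
// container has not reached a terminated state yet.
func terminatedReason(ctx context.Context, cs kubernetes.Interface, ns, podName string) (string, error) {
	pod, err := cs.CoreV1().Pods(ns).Get(ctx, podName, metav1.GetOptions{})
	if err != nil {
		return "", err
	}
	if len(pod.Status.ContainerStatuses) == 0 {
		return "", fmt.Errorf("pod %s has no container statuses yet", podName)
	}
	term := pod.Status.ContainerStatuses[0].State.Terminated
	if term == nil {
		return "", fmt.Errorf("container in pod %s has not terminated", podName)
	}
	return term.Reason, nil
}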
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":309,"completed":238,"skipped":4165,"failed":0} SSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [k8s.io] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 3 00:06:34.487: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 Feb 3 00:06:38.584: INFO: Deleting pod "var-expansion-875ddebd-bd29-4c9d-bb94-e66a52bd9573" in namespace "var-expansion-7591" Feb 3 00:06:38.591: INFO: Wait up to 5m0s for pod "var-expansion-875ddebd-bd29-4c9d-bb94-e66a52bd9573" to be fully deleted [AfterEach] [k8s.io] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 3 00:07:30.612: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-7591" for this suite. • [SLOW TEST:56.136 seconds] [k8s.io] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance]","total":309,"completed":239,"skipped":4175,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-instrumentation] Events API should delete a collection of events [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-instrumentation] Events API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 3 00:07:30.623: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-instrumentation] Events API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/instrumentation/events.go:81 [It] should delete a collection of events [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Create set of events STEP: get a list of Events with a label in the current namespace STEP: delete a list of events Feb 3 00:07:30.786: INFO: requesting DeleteCollection of events STEP: check that the list of events matches the requested quantity [AfterEach] [sig-instrumentation] Events API 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 3 00:07:30.815: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-2067" for this suite. •{"msg":"PASSED [sig-instrumentation] Events API should delete a collection of events [Conformance]","total":309,"completed":240,"skipped":4205,"failed":0} SSSS ------------------------------ [sig-auth] ServiceAccounts should mount projected service account token [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 3 00:07:30.841: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should mount projected service account token [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a pod to test service account token: Feb 3 00:07:30.970: INFO: Waiting up to 5m0s for pod "test-pod-fd84fe05-f0eb-41da-95de-96d38dd3bbff" in namespace "svcaccounts-5107" to be "Succeeded or Failed" Feb 3 00:07:30.977: INFO: Pod "test-pod-fd84fe05-f0eb-41da-95de-96d38dd3bbff": Phase="Pending", Reason="", readiness=false. Elapsed: 6.817373ms Feb 3 00:07:32.983: INFO: Pod "test-pod-fd84fe05-f0eb-41da-95de-96d38dd3bbff": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012302968s Feb 3 00:07:34.987: INFO: Pod "test-pod-fd84fe05-f0eb-41da-95de-96d38dd3bbff": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016636637s STEP: Saw pod success Feb 3 00:07:34.987: INFO: Pod "test-pod-fd84fe05-f0eb-41da-95de-96d38dd3bbff" satisfied condition "Succeeded or Failed" Feb 3 00:07:34.990: INFO: Trying to get logs from node leguer-worker pod test-pod-fd84fe05-f0eb-41da-95de-96d38dd3bbff container agnhost-container: STEP: delete the pod Feb 3 00:07:35.252: INFO: Waiting for pod test-pod-fd84fe05-f0eb-41da-95de-96d38dd3bbff to disappear Feb 3 00:07:35.290: INFO: Pod test-pod-fd84fe05-f0eb-41da-95de-96d38dd3bbff no longer exists [AfterEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 3 00:07:35.290: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-5107" for this suite. 
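The ServiceAccounts test above mounts a time-bound service account token into the pod through a projected volume and has the container read it back. A sketch of the projected-token volume definition; the volume name, token path, and expiry are illustrative, and leaving Audience empty means the token defaults to the API server's audience.

package e2esketch

import (
	corev1 "k8s.io/api/core/v1"
)

// projectedTokenVolume returns a projected volume asking the kubelet to mint a
// time-bound service account token; mounting it exposes the token as a file at
// <mountPath>/token inside the container.
func projectedTokenVolume(volName string) corev1.Volume {
	expiry := int64(3600) // illustrative: one-hour token
	return corev1.Volume{
		Name: volName,
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{{
					ServiceAccountToken: &corev1.ServiceAccountTokenProjection{
						Path:              "token",
						ExpirationSeconds: &expiry,
					},
				}},
			},
		},
	}
}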
•{"msg":"PASSED [sig-auth] ServiceAccounts should mount projected service account token [Conformance]","total":309,"completed":241,"skipped":4209,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-network] Ingress API should support creating Ingress API operations [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-network] Ingress API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 3 00:07:35.299: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename ingress STEP: Waiting for a default service account to be provisioned in namespace [It] should support creating Ingress API operations [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: getting /apis STEP: getting /apis/networking.k8s.io STEP: getting /apis/networking.k8s.iov1 STEP: creating STEP: getting STEP: listing STEP: watching Feb 3 00:07:35.443: INFO: starting watch STEP: cluster-wide listing STEP: cluster-wide watching Feb 3 00:07:35.447: INFO: starting watch STEP: patching STEP: updating Feb 3 00:07:35.458: INFO: waiting for watch events with expected annotations Feb 3 00:07:35.458: INFO: saw patched and updated annotations STEP: patching /status STEP: updating /status STEP: get /status STEP: deleting STEP: deleting a collection [AfterEach] [sig-network] Ingress API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 3 00:07:35.529: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "ingress-8732" for this suite. •{"msg":"PASSED [sig-network] Ingress API should support creating Ingress API operations [Conformance]","total":309,"completed":242,"skipped":4221,"failed":0} SSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 3 00:07:35.541: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41 [It] should provide container's memory limit [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a pod to test downward API volume plugin Feb 3 00:07:35.658: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f98de971-9a3f-4dee-8153-e10c34555a76" in namespace "projected-4156" to be "Succeeded or Failed" Feb 3 00:07:35.700: INFO: Pod "downwardapi-volume-f98de971-9a3f-4dee-8153-e10c34555a76": Phase="Pending", Reason="", readiness=false. Elapsed: 42.057577ms Feb 3 00:07:37.704: INFO: Pod "downwardapi-volume-f98de971-9a3f-4dee-8153-e10c34555a76": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.045833452s Feb 3 00:07:39.708: INFO: Pod "downwardapi-volume-f98de971-9a3f-4dee-8153-e10c34555a76": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.050272911s STEP: Saw pod success Feb 3 00:07:39.708: INFO: Pod "downwardapi-volume-f98de971-9a3f-4dee-8153-e10c34555a76" satisfied condition "Succeeded or Failed" Feb 3 00:07:39.711: INFO: Trying to get logs from node leguer-worker pod downwardapi-volume-f98de971-9a3f-4dee-8153-e10c34555a76 container client-container: STEP: delete the pod Feb 3 00:07:39.727: INFO: Waiting for pod downwardapi-volume-f98de971-9a3f-4dee-8153-e10c34555a76 to disappear Feb 3 00:07:39.760: INFO: Pod downwardapi-volume-f98de971-9a3f-4dee-8153-e10c34555a76 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 3 00:07:39.760: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4156" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":309,"completed":243,"skipped":4224,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 3 00:07:39.769: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating pod pod-subpath-test-configmap-rhdq STEP: Creating a pod to test atomic-volume-subpath Feb 3 00:07:39.892: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-rhdq" in namespace "subpath-4751" to be "Succeeded or Failed" Feb 3 00:07:39.938: INFO: Pod "pod-subpath-test-configmap-rhdq": Phase="Pending", Reason="", readiness=false. Elapsed: 46.036142ms Feb 3 00:07:41.942: INFO: Pod "pod-subpath-test-configmap-rhdq": Phase="Pending", Reason="", readiness=false. Elapsed: 2.049667994s Feb 3 00:07:43.946: INFO: Pod "pod-subpath-test-configmap-rhdq": Phase="Running", Reason="", readiness=true. Elapsed: 4.054333806s Feb 3 00:07:45.962: INFO: Pod "pod-subpath-test-configmap-rhdq": Phase="Running", Reason="", readiness=true. Elapsed: 6.069456946s Feb 3 00:07:47.966: INFO: Pod "pod-subpath-test-configmap-rhdq": Phase="Running", Reason="", readiness=true. Elapsed: 8.073596672s Feb 3 00:07:49.977: INFO: Pod "pod-subpath-test-configmap-rhdq": Phase="Running", Reason="", readiness=true. Elapsed: 10.084436819s Feb 3 00:07:51.986: INFO: Pod "pod-subpath-test-configmap-rhdq": Phase="Running", Reason="", readiness=true. Elapsed: 12.093899431s Feb 3 00:07:53.990: INFO: Pod "pod-subpath-test-configmap-rhdq": Phase="Running", Reason="", readiness=true. 
Elapsed: 14.098033763s Feb 3 00:07:55.994: INFO: Pod "pod-subpath-test-configmap-rhdq": Phase="Running", Reason="", readiness=true. Elapsed: 16.102086822s Feb 3 00:07:58.004: INFO: Pod "pod-subpath-test-configmap-rhdq": Phase="Running", Reason="", readiness=true. Elapsed: 18.111480281s Feb 3 00:08:00.008: INFO: Pod "pod-subpath-test-configmap-rhdq": Phase="Running", Reason="", readiness=true. Elapsed: 20.115425298s Feb 3 00:08:02.012: INFO: Pod "pod-subpath-test-configmap-rhdq": Phase="Running", Reason="", readiness=true. Elapsed: 22.120000446s Feb 3 00:08:04.017: INFO: Pod "pod-subpath-test-configmap-rhdq": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.124842981s STEP: Saw pod success Feb 3 00:08:04.017: INFO: Pod "pod-subpath-test-configmap-rhdq" satisfied condition "Succeeded or Failed" Feb 3 00:08:04.029: INFO: Trying to get logs from node leguer-worker pod pod-subpath-test-configmap-rhdq container test-container-subpath-configmap-rhdq: STEP: delete the pod Feb 3 00:08:04.078: INFO: Waiting for pod pod-subpath-test-configmap-rhdq to disappear Feb 3 00:08:04.109: INFO: Pod pod-subpath-test-configmap-rhdq no longer exists STEP: Deleting pod pod-subpath-test-configmap-rhdq Feb 3 00:08:04.109: INFO: Deleting pod "pod-subpath-test-configmap-rhdq" in namespace "subpath-4751" [AfterEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 3 00:08:04.112: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-4751" for this suite. • [SLOW TEST:24.352 seconds] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":309,"completed":244,"skipped":4239,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 3 00:08:04.121: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:53 [It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [AfterEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 3 00:09:04.218: INFO: Waiting up to 3m0s for all 
(but 0) nodes to be ready STEP: Destroying namespace "container-probe-646" for this suite. • [SLOW TEST:60.107 seconds] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":309,"completed":245,"skipped":4263,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should run through the lifecycle of Pods and PodStatus [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 3 00:09:04.229: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:187 [It] should run through the lifecycle of Pods and PodStatus [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: creating a Pod with a static label STEP: watching for Pod to be ready Feb 3 00:09:04.327: INFO: observed Pod pod-test in namespace pods-6912 in phase Pending conditions [] Feb 3 00:09:04.388: INFO: observed Pod pod-test in namespace pods-6912 in phase Pending conditions [{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-02-03 00:09:04 +0000 UTC }] Feb 3 00:09:04.405: INFO: observed Pod pod-test in namespace pods-6912 in phase Pending conditions [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-02-03 00:09:04 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-02-03 00:09:04 +0000 UTC ContainersNotReady containers with unready status: [pod-test]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-02-03 00:09:04 +0000 UTC ContainersNotReady containers with unready status: [pod-test]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-02-03 00:09:04 +0000 UTC }] STEP: patching the Pod with a new Label and updated data Feb 3 00:09:07.761: INFO: observed event type ADDED STEP: getting the Pod and ensuring that it's patched STEP: getting the PodStatus STEP: replacing the Pod's status Ready condition to False STEP: check the Pod again to ensure its Ready conditions are False STEP: deleting the Pod via a Collection with a LabelSelector STEP: watching for the Pod to be deleted Feb 3 00:09:07.829: INFO: observed event type ADDED Feb 3 00:09:07.829: INFO: observed event type MODIFIED Feb 3 00:09:07.829: INFO: observed event type MODIFIED Feb 3 00:09:07.829: INFO: observed event type MODIFIED Feb 3 00:09:07.829: INFO: observed event type MODIFIED Feb 3 00:09:07.829: INFO: observed event type MODIFIED Feb 3 00:09:07.830: INFO: observed event type MODIFIED [AfterEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 3 00:09:07.830: INFO: 
Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-6912" for this suite. •{"msg":"PASSED [k8s.io] Pods should run through the lifecycle of Pods and PodStatus [Conformance]","total":309,"completed":246,"skipped":4324,"failed":0} SSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 3 00:09:07.842: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-8740 [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Initializing watcher for selector baz=blah,foo=bar STEP: Creating stateful set ss in namespace statefulset-8740 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-8740 Feb 3 00:09:08.221: INFO: Found 0 stateful pods, waiting for 1 Feb 3 00:09:18.226: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod Feb 3 00:09:18.229: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-8740 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Feb 3 00:09:21.469: INFO: stderr: "I0203 00:09:21.331816 3134 log.go:181] (0xc00093ae70) (0xc0008803c0) Create stream\nI0203 00:09:21.331877 3134 log.go:181] (0xc00093ae70) (0xc0008803c0) Stream added, broadcasting: 1\nI0203 00:09:21.334392 3134 log.go:181] (0xc00093ae70) Reply frame received for 1\nI0203 00:09:21.334437 3134 log.go:181] (0xc00093ae70) (0xc000880460) Create stream\nI0203 00:09:21.334448 3134 log.go:181] (0xc00093ae70) (0xc000880460) Stream added, broadcasting: 3\nI0203 00:09:21.335542 3134 log.go:181] (0xc00093ae70) Reply frame received for 3\nI0203 00:09:21.335595 3134 log.go:181] (0xc00093ae70) (0xc000c46000) Create stream\nI0203 00:09:21.335622 3134 log.go:181] (0xc00093ae70) (0xc000c46000) Stream added, broadcasting: 5\nI0203 00:09:21.336630 3134 log.go:181] (0xc00093ae70) Reply frame received for 5\nI0203 00:09:21.426167 3134 log.go:181] (0xc00093ae70) Data frame received for 5\nI0203 00:09:21.426201 3134 log.go:181] (0xc000c46000) (5) Data frame handling\nI0203 00:09:21.426227 3134 log.go:181] (0xc000c46000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0203 00:09:21.462044 3134 log.go:181] (0xc00093ae70) Data frame received for 5\nI0203 00:09:21.462097 3134 log.go:181] 
(0xc000c46000) (5) Data frame handling\nI0203 00:09:21.462145 3134 log.go:181] (0xc00093ae70) Data frame received for 3\nI0203 00:09:21.462171 3134 log.go:181] (0xc000880460) (3) Data frame handling\nI0203 00:09:21.462202 3134 log.go:181] (0xc000880460) (3) Data frame sent\nI0203 00:09:21.462224 3134 log.go:181] (0xc00093ae70) Data frame received for 3\nI0203 00:09:21.462242 3134 log.go:181] (0xc000880460) (3) Data frame handling\nI0203 00:09:21.464218 3134 log.go:181] (0xc00093ae70) Data frame received for 1\nI0203 00:09:21.464239 3134 log.go:181] (0xc0008803c0) (1) Data frame handling\nI0203 00:09:21.464362 3134 log.go:181] (0xc0008803c0) (1) Data frame sent\nI0203 00:09:21.464380 3134 log.go:181] (0xc00093ae70) (0xc0008803c0) Stream removed, broadcasting: 1\nI0203 00:09:21.464429 3134 log.go:181] (0xc00093ae70) Go away received\nI0203 00:09:21.464726 3134 log.go:181] (0xc00093ae70) (0xc0008803c0) Stream removed, broadcasting: 1\nI0203 00:09:21.464743 3134 log.go:181] (0xc00093ae70) (0xc000880460) Stream removed, broadcasting: 3\nI0203 00:09:21.464752 3134 log.go:181] (0xc00093ae70) (0xc000c46000) Stream removed, broadcasting: 5\n" Feb 3 00:09:21.469: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Feb 3 00:09:21.469: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Feb 3 00:09:21.473: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Feb 3 00:09:31.479: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Feb 3 00:09:31.479: INFO: Waiting for statefulset status.replicas updated to 0 Feb 3 00:09:31.543: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999541s Feb 3 00:09:32.548: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.946867158s Feb 3 00:09:33.553: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.941750152s Feb 3 00:09:34.558: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.93643901s Feb 3 00:09:35.563: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.931817685s Feb 3 00:09:36.567: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.926792987s Feb 3 00:09:37.571: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.922742157s Feb 3 00:09:38.577: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.918354317s Feb 3 00:09:39.580: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.913152965s Feb 3 00:09:40.585: INFO: Verifying statefulset ss doesn't scale past 1 for another 909.317952ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-8740 Feb 3 00:09:41.589: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-8740 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Feb 3 00:09:41.806: INFO: stderr: "I0203 00:09:41.714427 3152 log.go:181] (0xc000142370) (0xc000e10000) Create stream\nI0203 00:09:41.714484 3152 log.go:181] (0xc000142370) (0xc000e10000) Stream added, broadcasting: 1\nI0203 00:09:41.715987 3152 log.go:181] (0xc000142370) Reply frame received for 1\nI0203 00:09:41.716020 3152 log.go:181] (0xc000142370) (0xc000c341e0) Create stream\nI0203 00:09:41.716031 3152 log.go:181] (0xc000142370) (0xc000c341e0) Stream added, broadcasting: 3\nI0203 00:09:41.716714 
3152 log.go:181] (0xc000142370) Reply frame received for 3\nI0203 00:09:41.716753 3152 log.go:181] (0xc000142370) (0xc00070d860) Create stream\nI0203 00:09:41.716769 3152 log.go:181] (0xc000142370) (0xc00070d860) Stream added, broadcasting: 5\nI0203 00:09:41.717515 3152 log.go:181] (0xc000142370) Reply frame received for 5\nI0203 00:09:41.798565 3152 log.go:181] (0xc000142370) Data frame received for 5\nI0203 00:09:41.798603 3152 log.go:181] (0xc00070d860) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0203 00:09:41.798625 3152 log.go:181] (0xc000142370) Data frame received for 3\nI0203 00:09:41.798652 3152 log.go:181] (0xc000c341e0) (3) Data frame handling\nI0203 00:09:41.798666 3152 log.go:181] (0xc000c341e0) (3) Data frame sent\nI0203 00:09:41.798679 3152 log.go:181] (0xc000142370) Data frame received for 3\nI0203 00:09:41.798688 3152 log.go:181] (0xc000c341e0) (3) Data frame handling\nI0203 00:09:41.798710 3152 log.go:181] (0xc00070d860) (5) Data frame sent\nI0203 00:09:41.798720 3152 log.go:181] (0xc000142370) Data frame received for 5\nI0203 00:09:41.798731 3152 log.go:181] (0xc00070d860) (5) Data frame handling\nI0203 00:09:41.799647 3152 log.go:181] (0xc000142370) Data frame received for 1\nI0203 00:09:41.799726 3152 log.go:181] (0xc000e10000) (1) Data frame handling\nI0203 00:09:41.799799 3152 log.go:181] (0xc000e10000) (1) Data frame sent\nI0203 00:09:41.799867 3152 log.go:181] (0xc000142370) (0xc000e10000) Stream removed, broadcasting: 1\nI0203 00:09:41.799944 3152 log.go:181] (0xc000142370) Go away received\nI0203 00:09:41.800256 3152 log.go:181] (0xc000142370) (0xc000e10000) Stream removed, broadcasting: 1\nI0203 00:09:41.800271 3152 log.go:181] (0xc000142370) (0xc000c341e0) Stream removed, broadcasting: 3\nI0203 00:09:41.800277 3152 log.go:181] (0xc000142370) (0xc00070d860) Stream removed, broadcasting: 5\n" Feb 3 00:09:41.807: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Feb 3 00:09:41.807: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Feb 3 00:09:41.835: INFO: Found 1 stateful pods, waiting for 3 Feb 3 00:09:51.839: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Feb 3 00:09:51.839: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Feb 3 00:09:51.839: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod Feb 3 00:09:51.871: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-8740 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Feb 3 00:09:52.086: INFO: stderr: "I0203 00:09:52.002686 3170 log.go:181] (0xc00003ac60) (0xc000a363c0) Create stream\nI0203 00:09:52.002732 3170 log.go:181] (0xc00003ac60) (0xc000a363c0) Stream added, broadcasting: 1\nI0203 00:09:52.004223 3170 log.go:181] (0xc00003ac60) Reply frame received for 1\nI0203 00:09:52.004256 3170 log.go:181] (0xc00003ac60) (0xc000f0a000) Create stream\nI0203 00:09:52.004269 3170 log.go:181] (0xc00003ac60) (0xc000f0a000) Stream added, broadcasting: 3\nI0203 00:09:52.004942 3170 log.go:181] (0xc00003ac60) Reply frame received for 3\nI0203 00:09:52.004960 3170 log.go:181] (0xc00003ac60) (0xc000f0a0a0) Create stream\nI0203 
00:09:52.004971 3170 log.go:181] (0xc00003ac60) (0xc000f0a0a0) Stream added, broadcasting: 5\nI0203 00:09:52.005655 3170 log.go:181] (0xc00003ac60) Reply frame received for 5\nI0203 00:09:52.075498 3170 log.go:181] (0xc00003ac60) Data frame received for 3\nI0203 00:09:52.075575 3170 log.go:181] (0xc000f0a000) (3) Data frame handling\nI0203 00:09:52.075592 3170 log.go:181] (0xc000f0a000) (3) Data frame sent\nI0203 00:09:52.075615 3170 log.go:181] (0xc00003ac60) Data frame received for 3\nI0203 00:09:52.075635 3170 log.go:181] (0xc000f0a000) (3) Data frame handling\nI0203 00:09:52.075655 3170 log.go:181] (0xc00003ac60) Data frame received for 5\nI0203 00:09:52.075678 3170 log.go:181] (0xc000f0a0a0) (5) Data frame handling\nI0203 00:09:52.075709 3170 log.go:181] (0xc000f0a0a0) (5) Data frame sent\nI0203 00:09:52.075729 3170 log.go:181] (0xc00003ac60) Data frame received for 5\nI0203 00:09:52.075740 3170 log.go:181] (0xc000f0a0a0) (5) Data frame handling\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0203 00:09:52.077425 3170 log.go:181] (0xc00003ac60) Data frame received for 1\nI0203 00:09:52.077453 3170 log.go:181] (0xc000a363c0) (1) Data frame handling\nI0203 00:09:52.077469 3170 log.go:181] (0xc000a363c0) (1) Data frame sent\nI0203 00:09:52.077620 3170 log.go:181] (0xc00003ac60) (0xc000a363c0) Stream removed, broadcasting: 1\nI0203 00:09:52.077703 3170 log.go:181] (0xc00003ac60) Go away received\nI0203 00:09:52.078007 3170 log.go:181] (0xc00003ac60) (0xc000a363c0) Stream removed, broadcasting: 1\nI0203 00:09:52.078033 3170 log.go:181] (0xc00003ac60) (0xc000f0a000) Stream removed, broadcasting: 3\nI0203 00:09:52.078048 3170 log.go:181] (0xc00003ac60) (0xc000f0a0a0) Stream removed, broadcasting: 5\n" Feb 3 00:09:52.086: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Feb 3 00:09:52.086: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Feb 3 00:09:52.086: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-8740 exec ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Feb 3 00:09:52.320: INFO: stderr: "I0203 00:09:52.216400 3188 log.go:181] (0xc000143080) (0xc00076a3c0) Create stream\nI0203 00:09:52.216450 3188 log.go:181] (0xc000143080) (0xc00076a3c0) Stream added, broadcasting: 1\nI0203 00:09:52.218322 3188 log.go:181] (0xc000143080) Reply frame received for 1\nI0203 00:09:52.218349 3188 log.go:181] (0xc000143080) (0xc00014cfa0) Create stream\nI0203 00:09:52.218356 3188 log.go:181] (0xc000143080) (0xc00014cfa0) Stream added, broadcasting: 3\nI0203 00:09:52.219150 3188 log.go:181] (0xc000143080) Reply frame received for 3\nI0203 00:09:52.219172 3188 log.go:181] (0xc000143080) (0xc0001a8e60) Create stream\nI0203 00:09:52.219179 3188 log.go:181] (0xc000143080) (0xc0001a8e60) Stream added, broadcasting: 5\nI0203 00:09:52.220174 3188 log.go:181] (0xc000143080) Reply frame received for 5\nI0203 00:09:52.279229 3188 log.go:181] (0xc000143080) Data frame received for 5\nI0203 00:09:52.279262 3188 log.go:181] (0xc0001a8e60) (5) Data frame handling\nI0203 00:09:52.279286 3188 log.go:181] (0xc0001a8e60) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0203 00:09:52.310675 3188 log.go:181] (0xc000143080) Data frame received for 3\nI0203 00:09:52.310719 3188 log.go:181] (0xc00014cfa0) (3) Data frame handling\nI0203 00:09:52.310750 3188 log.go:181] 
(0xc00014cfa0) (3) Data frame sent\nI0203 00:09:52.311046 3188 log.go:181] (0xc000143080) Data frame received for 5\nI0203 00:09:52.311081 3188 log.go:181] (0xc0001a8e60) (5) Data frame handling\nI0203 00:09:52.311164 3188 log.go:181] (0xc000143080) Data frame received for 3\nI0203 00:09:52.311186 3188 log.go:181] (0xc00014cfa0) (3) Data frame handling\nI0203 00:09:52.312947 3188 log.go:181] (0xc000143080) Data frame received for 1\nI0203 00:09:52.313064 3188 log.go:181] (0xc00076a3c0) (1) Data frame handling\nI0203 00:09:52.313113 3188 log.go:181] (0xc00076a3c0) (1) Data frame sent\nI0203 00:09:52.313147 3188 log.go:181] (0xc000143080) (0xc00076a3c0) Stream removed, broadcasting: 1\nI0203 00:09:52.313173 3188 log.go:181] (0xc000143080) Go away received\nI0203 00:09:52.313713 3188 log.go:181] (0xc000143080) (0xc00076a3c0) Stream removed, broadcasting: 1\nI0203 00:09:52.313739 3188 log.go:181] (0xc000143080) (0xc00014cfa0) Stream removed, broadcasting: 3\nI0203 00:09:52.313753 3188 log.go:181] (0xc000143080) (0xc0001a8e60) Stream removed, broadcasting: 5\n" Feb 3 00:09:52.320: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Feb 3 00:09:52.320: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Feb 3 00:09:52.320: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-8740 exec ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Feb 3 00:09:52.559: INFO: stderr: "I0203 00:09:52.449362 3206 log.go:181] (0xc00018d130) (0xc000bfe3c0) Create stream\nI0203 00:09:52.449423 3206 log.go:181] (0xc00018d130) (0xc000bfe3c0) Stream added, broadcasting: 1\nI0203 00:09:52.451416 3206 log.go:181] (0xc00018d130) Reply frame received for 1\nI0203 00:09:52.451466 3206 log.go:181] (0xc00018d130) (0xc00089a000) Create stream\nI0203 00:09:52.451484 3206 log.go:181] (0xc00018d130) (0xc00089a000) Stream added, broadcasting: 3\nI0203 00:09:52.452411 3206 log.go:181] (0xc00018d130) Reply frame received for 3\nI0203 00:09:52.452458 3206 log.go:181] (0xc00018d130) (0xc000bfe460) Create stream\nI0203 00:09:52.452472 3206 log.go:181] (0xc00018d130) (0xc000bfe460) Stream added, broadcasting: 5\nI0203 00:09:52.453578 3206 log.go:181] (0xc00018d130) Reply frame received for 5\nI0203 00:09:52.521155 3206 log.go:181] (0xc00018d130) Data frame received for 5\nI0203 00:09:52.521185 3206 log.go:181] (0xc000bfe460) (5) Data frame handling\nI0203 00:09:52.521203 3206 log.go:181] (0xc000bfe460) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0203 00:09:52.550587 3206 log.go:181] (0xc00018d130) Data frame received for 3\nI0203 00:09:52.550638 3206 log.go:181] (0xc00089a000) (3) Data frame handling\nI0203 00:09:52.550647 3206 log.go:181] (0xc00089a000) (3) Data frame sent\nI0203 00:09:52.550652 3206 log.go:181] (0xc00018d130) Data frame received for 3\nI0203 00:09:52.550656 3206 log.go:181] (0xc00089a000) (3) Data frame handling\nI0203 00:09:52.550678 3206 log.go:181] (0xc00018d130) Data frame received for 5\nI0203 00:09:52.550700 3206 log.go:181] (0xc000bfe460) (5) Data frame handling\nI0203 00:09:52.552987 3206 log.go:181] (0xc00018d130) Data frame received for 1\nI0203 00:09:52.553021 3206 log.go:181] (0xc000bfe3c0) (1) Data frame handling\nI0203 00:09:52.553046 3206 log.go:181] (0xc000bfe3c0) (1) Data frame sent\nI0203 00:09:52.553071 3206 log.go:181] (0xc00018d130) (0xc000bfe3c0) 
Stream removed, broadcasting: 1\nI0203 00:09:52.553115 3206 log.go:181] (0xc00018d130) Go away received\nI0203 00:09:52.553545 3206 log.go:181] (0xc00018d130) (0xc000bfe3c0) Stream removed, broadcasting: 1\nI0203 00:09:52.553564 3206 log.go:181] (0xc00018d130) (0xc00089a000) Stream removed, broadcasting: 3\nI0203 00:09:52.553574 3206 log.go:181] (0xc00018d130) (0xc000bfe460) Stream removed, broadcasting: 5\n" Feb 3 00:09:52.559: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Feb 3 00:09:52.559: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Feb 3 00:09:52.559: INFO: Waiting for statefulset status.replicas updated to 0 Feb 3 00:09:52.563: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3 Feb 3 00:10:02.572: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Feb 3 00:10:02.572: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Feb 3 00:10:02.572: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Feb 3 00:10:02.586: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999275s Feb 3 00:10:03.591: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.995833099s Feb 3 00:10:04.598: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.990498235s Feb 3 00:10:05.604: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.984148984s Feb 3 00:10:06.611: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.978275386s Feb 3 00:10:07.615: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.971243789s Feb 3 00:10:08.622: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.966568022s Feb 3 00:10:09.627: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.960060872s Feb 3 00:10:10.632: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.955102031s Feb 3 00:10:11.637: INFO: Verifying statefulset ss doesn't scale past 3 for another 949.851424ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespacestatefulset-8740 Feb 3 00:10:12.642: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-8740 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Feb 3 00:10:12.861: INFO: stderr: "I0203 00:10:12.778814 3224 log.go:181] (0xc0001431e0) (0xc000e84000) Create stream\nI0203 00:10:12.778870 3224 log.go:181] (0xc0001431e0) (0xc000e84000) Stream added, broadcasting: 1\nI0203 00:10:12.781125 3224 log.go:181] (0xc0001431e0) Reply frame received for 1\nI0203 00:10:12.781158 3224 log.go:181] (0xc0001431e0) (0xc000e840a0) Create stream\nI0203 00:10:12.781167 3224 log.go:181] (0xc0001431e0) (0xc000e840a0) Stream added, broadcasting: 3\nI0203 00:10:12.782095 3224 log.go:181] (0xc0001431e0) Reply frame received for 3\nI0203 00:10:12.782131 3224 log.go:181] (0xc0001431e0) (0xc000c2c1e0) Create stream\nI0203 00:10:12.782141 3224 log.go:181] (0xc0001431e0) (0xc000c2c1e0) Stream added, broadcasting: 5\nI0203 00:10:12.783087 3224 log.go:181] (0xc0001431e0) Reply frame received for 5\nI0203 00:10:12.851117 3224 log.go:181] (0xc0001431e0) Data frame received for 5\nI0203 00:10:12.851168 3224 log.go:181] (0xc000c2c1e0) (5) Data frame handling\nI0203 00:10:12.851188 3224 log.go:181] 
(0xc000c2c1e0) (5) Data frame sent\nI0203 00:10:12.851205 3224 log.go:181] (0xc0001431e0) Data frame received for 5\nI0203 00:10:12.851217 3224 log.go:181] (0xc000c2c1e0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0203 00:10:12.851242 3224 log.go:181] (0xc0001431e0) Data frame received for 3\nI0203 00:10:12.851259 3224 log.go:181] (0xc000e840a0) (3) Data frame handling\nI0203 00:10:12.851273 3224 log.go:181] (0xc000e840a0) (3) Data frame sent\nI0203 00:10:12.851280 3224 log.go:181] (0xc0001431e0) Data frame received for 3\nI0203 00:10:12.851287 3224 log.go:181] (0xc000e840a0) (3) Data frame handling\nI0203 00:10:12.852481 3224 log.go:181] (0xc0001431e0) Data frame received for 1\nI0203 00:10:12.852504 3224 log.go:181] (0xc000e84000) (1) Data frame handling\nI0203 00:10:12.852517 3224 log.go:181] (0xc000e84000) (1) Data frame sent\nI0203 00:10:12.852633 3224 log.go:181] (0xc0001431e0) (0xc000e84000) Stream removed, broadcasting: 1\nI0203 00:10:12.852662 3224 log.go:181] (0xc0001431e0) Go away received\nI0203 00:10:12.853212 3224 log.go:181] (0xc0001431e0) (0xc000e84000) Stream removed, broadcasting: 1\nI0203 00:10:12.853234 3224 log.go:181] (0xc0001431e0) (0xc000e840a0) Stream removed, broadcasting: 3\nI0203 00:10:12.853245 3224 log.go:181] (0xc0001431e0) (0xc000c2c1e0) Stream removed, broadcasting: 5\n" Feb 3 00:10:12.861: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Feb 3 00:10:12.861: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Feb 3 00:10:12.861: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-8740 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Feb 3 00:10:13.148: INFO: stderr: "I0203 00:10:13.066447 3242 log.go:181] (0xc001000210) (0xc000bf4460) Create stream\nI0203 00:10:13.066506 3242 log.go:181] (0xc001000210) (0xc000bf4460) Stream added, broadcasting: 1\nI0203 00:10:13.069360 3242 log.go:181] (0xc001000210) Reply frame received for 1\nI0203 00:10:13.069393 3242 log.go:181] (0xc001000210) (0xc000bf4000) Create stream\nI0203 00:10:13.069402 3242 log.go:181] (0xc001000210) (0xc000bf4000) Stream added, broadcasting: 3\nI0203 00:10:13.070139 3242 log.go:181] (0xc001000210) Reply frame received for 3\nI0203 00:10:13.070168 3242 log.go:181] (0xc001000210) (0xc000542000) Create stream\nI0203 00:10:13.070176 3242 log.go:181] (0xc001000210) (0xc000542000) Stream added, broadcasting: 5\nI0203 00:10:13.070765 3242 log.go:181] (0xc001000210) Reply frame received for 5\nI0203 00:10:13.140451 3242 log.go:181] (0xc001000210) Data frame received for 5\nI0203 00:10:13.140474 3242 log.go:181] (0xc000542000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0203 00:10:13.140508 3242 log.go:181] (0xc001000210) Data frame received for 3\nI0203 00:10:13.140552 3242 log.go:181] (0xc000bf4000) (3) Data frame handling\nI0203 00:10:13.140586 3242 log.go:181] (0xc000bf4000) (3) Data frame sent\nI0203 00:10:13.140611 3242 log.go:181] (0xc001000210) Data frame received for 3\nI0203 00:10:13.140629 3242 log.go:181] (0xc000bf4000) (3) Data frame handling\nI0203 00:10:13.140644 3242 log.go:181] (0xc000542000) (5) Data frame sent\nI0203 00:10:13.140656 3242 log.go:181] (0xc001000210) Data frame received for 5\nI0203 00:10:13.140666 3242 log.go:181] (0xc000542000) (5) Data frame handling\nI0203 
00:10:13.142462 3242 log.go:181] (0xc001000210) Data frame received for 1\nI0203 00:10:13.142514 3242 log.go:181] (0xc000bf4460) (1) Data frame handling\nI0203 00:10:13.142540 3242 log.go:181] (0xc000bf4460) (1) Data frame sent\nI0203 00:10:13.142561 3242 log.go:181] (0xc001000210) (0xc000bf4460) Stream removed, broadcasting: 1\nI0203 00:10:13.142587 3242 log.go:181] (0xc001000210) Go away received\nI0203 00:10:13.142834 3242 log.go:181] (0xc001000210) (0xc000bf4460) Stream removed, broadcasting: 1\nI0203 00:10:13.142850 3242 log.go:181] (0xc001000210) (0xc000bf4000) Stream removed, broadcasting: 3\nI0203 00:10:13.142855 3242 log.go:181] (0xc001000210) (0xc000542000) Stream removed, broadcasting: 5\n" Feb 3 00:10:13.148: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Feb 3 00:10:13.148: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Feb 3 00:10:13.148: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-8740 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Feb 3 00:10:13.839: INFO: rc: 1 Feb 3 00:10:13.839: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-8740 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: I0203 00:10:13.386693 3260 log.go:181] (0xc0001b1340) (0xc000816a00) Create stream I0203 00:10:13.386769 3260 log.go:181] (0xc0001b1340) (0xc000816a00) Stream added, broadcasting: 1 I0203 00:10:13.388637 3260 log.go:181] (0xc0001b1340) Reply frame received for 1 I0203 00:10:13.388718 3260 log.go:181] (0xc0001b1340) (0xc00031e280) Create stream I0203 00:10:13.388770 3260 log.go:181] (0xc0001b1340) (0xc00031e280) Stream added, broadcasting: 3 I0203 00:10:13.389976 3260 log.go:181] (0xc0001b1340) Reply frame received for 3 I0203 00:10:13.390009 3260 log.go:181] (0xc0001b1340) (0xc000816aa0) Create stream I0203 00:10:13.390018 3260 log.go:181] (0xc0001b1340) (0xc000816aa0) Stream added, broadcasting: 5 I0203 00:10:13.390870 3260 log.go:181] (0xc0001b1340) Reply frame received for 5 I0203 00:10:13.831914 3260 log.go:181] (0xc0001b1340) Data frame received for 1 I0203 00:10:13.831937 3260 log.go:181] (0xc000816a00) (1) Data frame handling I0203 00:10:13.831948 3260 log.go:181] (0xc000816a00) (1) Data frame sent I0203 00:10:13.831999 3260 log.go:181] (0xc0001b1340) (0xc000816a00) Stream removed, broadcasting: 1 I0203 00:10:13.832263 3260 log.go:181] (0xc0001b1340) (0xc00031e280) Stream removed, broadcasting: 3 I0203 00:10:13.832330 3260 log.go:181] (0xc0001b1340) (0xc000816aa0) Stream removed, broadcasting: 5 I0203 00:10:13.832396 3260 log.go:181] (0xc0001b1340) Go away received I0203 00:10:13.833299 3260 log.go:181] (0xc0001b1340) (0xc000816a00) Stream removed, broadcasting: 1 I0203 00:10:13.833312 3260 log.go:181] (0xc0001b1340) (0xc00031e280) Stream removed, broadcasting: 3 I0203 00:10:13.833317 3260 log.go:181] (0xc0001b1340) (0xc000816aa0) Stream removed, broadcasting: 5 error: Internal error occurred: error executing command in container: failed to exec in container: failed to create exec "574f3bdb866ec777a227178ac99e68854265038019962e87e679a7c7352ae4a1": task 304c7f7452738a5ae374cd23eb177104c4faaff6ac47b75b5f2ccd4b4142053f not found: not found error: exit status 1 Feb 
3 00:10:23.839: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-8740 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Feb 3 00:10:23.990: INFO: rc: 1 Feb 3 00:10:23.990: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-8740 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: error: unable to upgrade connection: container not found ("webserver") error: exit status 1 Feb 3 00:10:33.990: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-8740 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Feb 3 00:10:34.093: INFO: rc: 1 Feb 3 00:10:34.093: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-8740 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Feb 3 00:10:44.093: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-8740 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Feb 3 00:10:44.205: INFO: rc: 1 Feb 3 00:10:44.205: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-8740 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Feb 3 00:10:54.205: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-8740 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Feb 3 00:10:54.306: INFO: rc: 1 Feb 3 00:10:54.306: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-8740 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Feb 3 00:11:04.306: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-8740 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Feb 3 00:11:04.407: INFO: rc: 1 Feb 3 00:11:04.407: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-8740 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Feb 3 00:11:14.407: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-8740 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Feb 3 00:11:14.507: INFO: rc: 
1 Feb 3 00:11:14.507: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-8740 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Feb 3 00:11:24.508: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-8740 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Feb 3 00:11:24.606: INFO: rc: 1 Feb 3 00:11:24.606: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-8740 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Feb 3 00:11:34.607: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-8740 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Feb 3 00:11:34.703: INFO: rc: 1 Feb 3 00:11:34.703: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-8740 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Feb 3 00:11:44.703: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-8740 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Feb 3 00:11:44.808: INFO: rc: 1 Feb 3 00:11:44.808: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-8740 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Feb 3 00:11:54.808: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-8740 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Feb 3 00:11:54.905: INFO: rc: 1 Feb 3 00:11:54.905: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-8740 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Feb 3 00:12:04.905: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-8740 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Feb 3 00:12:05.000: INFO: rc: 1 Feb 3 00:12:05.000: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-8740 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || 
true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Feb 3 00:12:15.000: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-8740 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Feb 3 00:12:15.103: INFO: rc: 1 Feb 3 00:12:15.103: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-8740 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Feb 3 00:12:25.103: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-8740 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Feb 3 00:12:25.200: INFO: rc: 1 Feb 3 00:12:25.200: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-8740 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Feb 3 00:12:35.200: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-8740 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Feb 3 00:12:35.302: INFO: rc: 1 Feb 3 00:12:35.302: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-8740 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Feb 3 00:12:45.302: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-8740 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Feb 3 00:12:45.403: INFO: rc: 1 Feb 3 00:12:45.403: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-8740 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Feb 3 00:12:55.403: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-8740 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Feb 3 00:12:55.507: INFO: rc: 1 Feb 3 00:12:55.507: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-8740 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Feb 3 00:13:05.508: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-8740 exec ss-2 -- /bin/sh -x -c 
mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Feb 3 00:13:05.605: INFO: rc: 1 Feb 3 00:13:05.605: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-8740 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Feb 3 00:13:15.605: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-8740 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Feb 3 00:13:15.718: INFO: rc: 1 Feb 3 00:13:15.718: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-8740 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Feb 3 00:13:25.718: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-8740 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Feb 3 00:13:25.828: INFO: rc: 1 Feb 3 00:13:25.828: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-8740 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Feb 3 00:13:35.829: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-8740 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Feb 3 00:13:35.925: INFO: rc: 1 Feb 3 00:13:35.925: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-8740 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Feb 3 00:13:45.926: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-8740 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Feb 3 00:13:46.030: INFO: rc: 1 Feb 3 00:13:46.030: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-8740 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Feb 3 00:13:56.030: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-8740 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Feb 3 00:13:56.132: INFO: rc: 1 Feb 3 00:13:56.132: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config 
--namespace=statefulset-8740 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Feb 3 00:14:06.132: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-8740 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Feb 3 00:14:06.248: INFO: rc: 1 Feb 3 00:14:06.248: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-8740 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Feb 3 00:14:16.248: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-8740 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Feb 3 00:14:16.353: INFO: rc: 1 Feb 3 00:14:16.353: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-8740 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Feb 3 00:14:26.354: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-8740 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Feb 3 00:14:26.457: INFO: rc: 1 Feb 3 00:14:26.457: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-8740 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Feb 3 00:14:36.457: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-8740 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Feb 3 00:14:36.555: INFO: rc: 1 Feb 3 00:14:36.555: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-8740 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Feb 3 00:14:46.556: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-8740 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Feb 3 00:14:46.663: INFO: rc: 1 Feb 3 00:14:46.663: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-8740 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Feb 3 00:14:56.663: INFO: Running '/usr/local/bin/kubectl 
--server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-8740 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Feb 3 00:14:56.778: INFO: rc: 1 Feb 3 00:14:56.778: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-8740 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Feb 3 00:15:06.778: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-8740 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Feb 3 00:15:06.881: INFO: rc: 1 Feb 3 00:15:06.881: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-8740 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Feb 3 00:15:16.881: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-8740 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Feb 3 00:15:16.989: INFO: rc: 1 Feb 3 00:15:16.990: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: Feb 3 00:15:16.990: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 Feb 3 00:15:17.000: INFO: Deleting all statefulset in ns statefulset-8740 Feb 3 00:15:17.002: INFO: Scaling statefulset ss to 0 Feb 3 00:15:17.010: INFO: Waiting for statefulset status.replicas updated to 0 Feb 3 00:15:17.012: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 3 00:15:17.036: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-8740" for this suite. 
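The scale-down recorded above ("Scaling statefulset ss to 0", followed by the check that the pods were removed in reverse order) can be sketched outside the e2e framework with plain kubectl. This is an illustrative snippet only, reusing the StatefulSet name and namespace that appear in the log; it is not a command taken from the run:

kubectl --namespace=statefulset-8740 scale statefulset ss --replicas=0
# watch the pods terminate; with the default ordered pod management the highest
# ordinal goes first, i.e. ss-2, then ss-1, then ss-0
kubectl --namespace=statefulset-8740 get pods -w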
• [SLOW TEST:369.203 seconds] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":309,"completed":247,"skipped":4332,"failed":0} SSS ------------------------------ [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 3 00:15:17.046: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for deployment deletion to see if the garbage collector mistakenly deletes the rs STEP: Gathering metrics W0203 00:15:18.405239 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Feb 3 00:16:20.452: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering. [AfterEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 3 00:16:20.452: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-9251" for this suite. 
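The garbage-collector case above deletes a Deployment with deleteOptions.PropagationPolicy=Orphan and then verifies that the ReplicaSet it created survives. A hedged kubectl equivalent, with a placeholder Deployment name rather than anything from the run, would look like:

# orphan the dependents instead of cascading the delete (kubectl v1.20+ spelling)
kubectl --namespace=gc-9251 delete deployment <deployment-name> --cascade=orphan
# the ReplicaSet owned by the Deployment should still be listed afterwards
kubectl --namespace=gc-9251 get replicasets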
• [SLOW TEST:63.417 seconds] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":309,"completed":248,"skipped":4335,"failed":0} SSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 3 00:16:20.463: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Performing setup for networking test in namespace pod-network-test-172 STEP: creating a selector STEP: Creating the service pods in kubernetes Feb 3 00:16:20.599: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Feb 3 00:16:20.642: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Feb 3 00:16:22.646: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Feb 3 00:16:24.647: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Feb 3 00:16:26.918: INFO: The status of Pod netserver-0 is Running (Ready = false) Feb 3 00:16:28.648: INFO: The status of Pod netserver-0 is Running (Ready = false) Feb 3 00:16:30.647: INFO: The status of Pod netserver-0 is Running (Ready = false) Feb 3 00:16:32.648: INFO: The status of Pod netserver-0 is Running (Ready = false) Feb 3 00:16:34.647: INFO: The status of Pod netserver-0 is Running (Ready = false) Feb 3 00:16:36.647: INFO: The status of Pod netserver-0 is Running (Ready = false) Feb 3 00:16:38.647: INFO: The status of Pod netserver-0 is Running (Ready = false) Feb 3 00:16:40.647: INFO: The status of Pod netserver-0 is Running (Ready = true) Feb 3 00:16:40.653: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Feb 3 00:16:44.728: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2 Feb 3 00:16:44.728: INFO: Going to poll 10.244.2.176 on port 8081 at least 0 times, with a maximum of 34 tries before failing Feb 3 00:16:44.730: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.2.176 8081 | grep -v '^\s*$'] Namespace:pod-network-test-172 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Feb 3 00:16:44.730: INFO: >>> kubeConfig: /root/.kube/config I0203 00:16:44.772133 7 log.go:181] (0xc005d4c420) (0xc000ac52c0) 
Create stream I0203 00:16:44.772165 7 log.go:181] (0xc005d4c420) (0xc000ac52c0) Stream added, broadcasting: 1 I0203 00:16:44.774055 7 log.go:181] (0xc005d4c420) Reply frame received for 1 I0203 00:16:44.774091 7 log.go:181] (0xc005d4c420) (0xc0011aeaa0) Create stream I0203 00:16:44.774106 7 log.go:181] (0xc005d4c420) (0xc0011aeaa0) Stream added, broadcasting: 3 I0203 00:16:44.775169 7 log.go:181] (0xc005d4c420) Reply frame received for 3 I0203 00:16:44.775207 7 log.go:181] (0xc005d4c420) (0xc0073312c0) Create stream I0203 00:16:44.775215 7 log.go:181] (0xc005d4c420) (0xc0073312c0) Stream added, broadcasting: 5 I0203 00:16:44.775978 7 log.go:181] (0xc005d4c420) Reply frame received for 5 I0203 00:16:45.845798 7 log.go:181] (0xc005d4c420) Data frame received for 5 I0203 00:16:45.845843 7 log.go:181] (0xc0073312c0) (5) Data frame handling I0203 00:16:45.845888 7 log.go:181] (0xc005d4c420) Data frame received for 3 I0203 00:16:45.845921 7 log.go:181] (0xc0011aeaa0) (3) Data frame handling I0203 00:16:45.845944 7 log.go:181] (0xc0011aeaa0) (3) Data frame sent I0203 00:16:45.845964 7 log.go:181] (0xc005d4c420) Data frame received for 3 I0203 00:16:45.846006 7 log.go:181] (0xc0011aeaa0) (3) Data frame handling I0203 00:16:45.848299 7 log.go:181] (0xc005d4c420) Data frame received for 1 I0203 00:16:45.848327 7 log.go:181] (0xc000ac52c0) (1) Data frame handling I0203 00:16:45.848371 7 log.go:181] (0xc000ac52c0) (1) Data frame sent I0203 00:16:45.848394 7 log.go:181] (0xc005d4c420) (0xc000ac52c0) Stream removed, broadcasting: 1 I0203 00:16:45.848412 7 log.go:181] (0xc005d4c420) Go away received I0203 00:16:45.848546 7 log.go:181] (0xc005d4c420) (0xc000ac52c0) Stream removed, broadcasting: 1 I0203 00:16:45.848572 7 log.go:181] (0xc005d4c420) (0xc0011aeaa0) Stream removed, broadcasting: 3 I0203 00:16:45.848587 7 log.go:181] (0xc005d4c420) (0xc0073312c0) Stream removed, broadcasting: 5 Feb 3 00:16:45.848: INFO: Found all 1 expected endpoints: [netserver-0] Feb 3 00:16:45.848: INFO: Going to poll 10.244.1.17 on port 8081 at least 0 times, with a maximum of 34 tries before failing Feb 3 00:16:45.853: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.1.17 8081 | grep -v '^\s*$'] Namespace:pod-network-test-172 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Feb 3 00:16:45.853: INFO: >>> kubeConfig: /root/.kube/config I0203 00:16:45.882408 7 log.go:181] (0xc000816fd0) (0xc007331720) Create stream I0203 00:16:45.882427 7 log.go:181] (0xc000816fd0) (0xc007331720) Stream added, broadcasting: 1 I0203 00:16:45.884376 7 log.go:181] (0xc000816fd0) Reply frame received for 1 I0203 00:16:45.884435 7 log.go:181] (0xc000816fd0) (0xc0011aebe0) Create stream I0203 00:16:45.884450 7 log.go:181] (0xc000816fd0) (0xc0011aebe0) Stream added, broadcasting: 3 I0203 00:16:45.885601 7 log.go:181] (0xc000816fd0) Reply frame received for 3 I0203 00:16:45.885643 7 log.go:181] (0xc000816fd0) (0xc000ac54a0) Create stream I0203 00:16:45.885657 7 log.go:181] (0xc000816fd0) (0xc000ac54a0) Stream added, broadcasting: 5 I0203 00:16:45.886594 7 log.go:181] (0xc000816fd0) Reply frame received for 5 I0203 00:16:46.974241 7 log.go:181] (0xc000816fd0) Data frame received for 3 I0203 00:16:46.974353 7 log.go:181] (0xc0011aebe0) (3) Data frame handling I0203 00:16:46.974391 7 log.go:181] (0xc0011aebe0) (3) Data frame sent I0203 00:16:46.974418 7 log.go:181] (0xc000816fd0) Data frame received for 3 I0203 00:16:46.974483 7 
log.go:181] (0xc000816fd0) Data frame received for 5 I0203 00:16:46.974535 7 log.go:181] (0xc000ac54a0) (5) Data frame handling I0203 00:16:46.974573 7 log.go:181] (0xc0011aebe0) (3) Data frame handling I0203 00:16:46.977220 7 log.go:181] (0xc000816fd0) Data frame received for 1 I0203 00:16:46.977246 7 log.go:181] (0xc007331720) (1) Data frame handling I0203 00:16:46.977262 7 log.go:181] (0xc007331720) (1) Data frame sent I0203 00:16:46.977397 7 log.go:181] (0xc000816fd0) (0xc007331720) Stream removed, broadcasting: 1 I0203 00:16:46.977477 7 log.go:181] (0xc000816fd0) Go away received I0203 00:16:46.977523 7 log.go:181] (0xc000816fd0) (0xc007331720) Stream removed, broadcasting: 1 I0203 00:16:46.977548 7 log.go:181] (0xc000816fd0) (0xc0011aebe0) Stream removed, broadcasting: 3 I0203 00:16:46.977560 7 log.go:181] (0xc000816fd0) (0xc000ac54a0) Stream removed, broadcasting: 5 Feb 3 00:16:46.977: INFO: Found all 1 expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 3 00:16:46.977: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-172" for this suite. • [SLOW TEST:26.523 seconds] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:27 Granular Checks: Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:30 should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":309,"completed":249,"skipped":4344,"failed":0} SSS ------------------------------ [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 3 00:16:46.986: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a pod to test downward API volume plugin Feb 3 00:16:47.139: INFO: Waiting up to 5m0s for pod "downwardapi-volume-14345213-4a77-4968-a2e8-b6eaabf536af" in namespace "downward-api-3280" to be "Succeeded or Failed" Feb 3 00:16:47.194: INFO: Pod "downwardapi-volume-14345213-4a77-4968-a2e8-b6eaabf536af": Phase="Pending", Reason="", readiness=false. Elapsed: 54.906298ms Feb 3 00:16:49.199: INFO: Pod "downwardapi-volume-14345213-4a77-4968-a2e8-b6eaabf536af": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.059473802s Feb 3 00:16:51.206: INFO: Pod "downwardapi-volume-14345213-4a77-4968-a2e8-b6eaabf536af": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.066279319s STEP: Saw pod success Feb 3 00:16:51.206: INFO: Pod "downwardapi-volume-14345213-4a77-4968-a2e8-b6eaabf536af" satisfied condition "Succeeded or Failed" Feb 3 00:16:51.209: INFO: Trying to get logs from node leguer-worker2 pod downwardapi-volume-14345213-4a77-4968-a2e8-b6eaabf536af container client-container: STEP: delete the pod Feb 3 00:16:51.357: INFO: Waiting for pod downwardapi-volume-14345213-4a77-4968-a2e8-b6eaabf536af to disappear Feb 3 00:16:51.360: INFO: Pod downwardapi-volume-14345213-4a77-4968-a2e8-b6eaabf536af no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 3 00:16:51.360: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3280" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":309,"completed":250,"skipped":4347,"failed":0} SSSSS ------------------------------ [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 3 00:16:51.367: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:187 [It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 Feb 3 00:16:51.457: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 3 00:16:57.521: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-7524" for this suite. 
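The Pods test just above retrieves container logs over a websocket connection to the API server. As a rough illustration, the same log data can be read with kubectl, or by hitting the pod's log subresource directly; the pod and container names below are placeholders, not values from the run:

kubectl --namespace=pods-7524 logs <pod-name> -c <container-name>
# roughly the API path behind the log subresource that the websocket client upgrades against
kubectl get --raw "/api/v1/namespaces/pods-7524/pods/<pod-name>/log?container=<container-name>"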
• [SLOW TEST:6.164 seconds] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":309,"completed":251,"skipped":4352,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 3 00:16:57.531: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Feb 3 00:16:58.470: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Feb 3 00:17:00.481: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63747908218, loc:(*time.Location)(0x7962e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63747908218, loc:(*time.Location)(0x7962e20)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63747908218, loc:(*time.Location)(0x7962e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63747908218, loc:(*time.Location)(0x7962e20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6bd9446d55\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Feb 3 00:17:03.515: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Registering a validating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Registering a mutating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Creating a dummy validating-webhook-configuration object STEP: Deleting the 
validating-webhook-configuration, which should be possible to remove STEP: Creating a dummy mutating-webhook-configuration object STEP: Deleting the mutating-webhook-configuration, which should be possible to remove [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 3 00:17:03.673: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-682" for this suite. STEP: Destroying namespace "webhook-682-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 • [SLOW TEST:6.274 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":309,"completed":252,"skipped":4372,"failed":0} S ------------------------------ [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 3 00:17:03.805: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a pod to test downward API volume plugin Feb 3 00:17:03.954: INFO: Waiting up to 5m0s for pod "downwardapi-volume-17a2629d-2ae1-493f-866e-8f4119e0d2f9" in namespace "downward-api-2510" to be "Succeeded or Failed" Feb 3 00:17:03.965: INFO: Pod "downwardapi-volume-17a2629d-2ae1-493f-866e-8f4119e0d2f9": Phase="Pending", Reason="", readiness=false. Elapsed: 11.315068ms Feb 3 00:17:06.056: INFO: Pod "downwardapi-volume-17a2629d-2ae1-493f-866e-8f4119e0d2f9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.102709494s Feb 3 00:17:08.060: INFO: Pod "downwardapi-volume-17a2629d-2ae1-493f-866e-8f4119e0d2f9": Phase="Running", Reason="", readiness=true. Elapsed: 4.106259862s Feb 3 00:17:10.081: INFO: Pod "downwardapi-volume-17a2629d-2ae1-493f-866e-8f4119e0d2f9": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.127110311s STEP: Saw pod success Feb 3 00:17:10.081: INFO: Pod "downwardapi-volume-17a2629d-2ae1-493f-866e-8f4119e0d2f9" satisfied condition "Succeeded or Failed" Feb 3 00:17:10.085: INFO: Trying to get logs from node leguer-worker pod downwardapi-volume-17a2629d-2ae1-493f-866e-8f4119e0d2f9 container client-container: STEP: delete the pod Feb 3 00:17:10.123: INFO: Waiting for pod downwardapi-volume-17a2629d-2ae1-493f-866e-8f4119e0d2f9 to disappear Feb 3 00:17:10.158: INFO: Pod downwardapi-volume-17a2629d-2ae1-493f-866e-8f4119e0d2f9 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 3 00:17:10.158: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2510" for this suite. • [SLOW TEST:6.363 seconds] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:36 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":309,"completed":253,"skipped":4373,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 3 00:17:10.169: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a pod to test downward API volume plugin Feb 3 00:17:10.304: INFO: Waiting up to 5m0s for pod "downwardapi-volume-34d76e8d-f20f-4d11-ab27-67a2cc3a27fe" in namespace "projected-2787" to be "Succeeded or Failed" Feb 3 00:17:10.307: INFO: Pod "downwardapi-volume-34d76e8d-f20f-4d11-ab27-67a2cc3a27fe": Phase="Pending", Reason="", readiness=false. Elapsed: 3.392731ms Feb 3 00:17:12.367: INFO: Pod "downwardapi-volume-34d76e8d-f20f-4d11-ab27-67a2cc3a27fe": Phase="Pending", Reason="", readiness=false. Elapsed: 2.063282813s Feb 3 00:17:14.372: INFO: Pod "downwardapi-volume-34d76e8d-f20f-4d11-ab27-67a2cc3a27fe": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.067736038s STEP: Saw pod success Feb 3 00:17:14.372: INFO: Pod "downwardapi-volume-34d76e8d-f20f-4d11-ab27-67a2cc3a27fe" satisfied condition "Succeeded or Failed" Feb 3 00:17:14.375: INFO: Trying to get logs from node leguer-worker pod downwardapi-volume-34d76e8d-f20f-4d11-ab27-67a2cc3a27fe container client-container: STEP: delete the pod Feb 3 00:17:14.548: INFO: Waiting for pod downwardapi-volume-34d76e8d-f20f-4d11-ab27-67a2cc3a27fe to disappear Feb 3 00:17:14.596: INFO: Pod downwardapi-volume-34d76e8d-f20f-4d11-ab27-67a2cc3a27fe no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 3 00:17:14.596: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2787" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":309,"completed":254,"skipped":4423,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPreemption [Serial] PriorityClass endpoints verify PriorityClass endpoints can be operated with different HTTP methods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 3 00:17:14.605: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-preemption STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:90 Feb 3 00:17:14.852: INFO: Waiting up to 1m0s for all nodes to be ready Feb 3 00:18:14.878: INFO: Waiting for terminating namespaces to be deleted... [BeforeEach] PriorityClass endpoints /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 3 00:18:14.881: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-preemption-path STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] PriorityClass endpoints /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:679 [It] verify PriorityClass endpoints can be operated with different HTTP methods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 Feb 3 00:18:14.992: INFO: PriorityClass.scheduling.k8s.io "p1" is invalid: Value: Forbidden: may not be changed in an update. Feb 3 00:18:14.996: INFO: PriorityClass.scheduling.k8s.io "p2" is invalid: Value: Forbidden: may not be changed in an update. [AfterEach] PriorityClass endpoints /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 3 00:18:15.020: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-path-8687" for this suite. 
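The PriorityClass endpoint checks above exercise the usual HTTP verbs and confirm that the value field is immutable ("may not be changed in an update"). A sketch of the same behaviour with kubectl, using throwaway names that do not appear in the run:

kubectl create priorityclass demo-priority --value=100 --description="illustration only"
# changing .value on an existing PriorityClass is expected to be rejected by the API server
kubectl patch priorityclass demo-priority --type=merge -p '{"value":200}'
kubectl delete priorityclass demo-priority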
[AfterEach] PriorityClass endpoints /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:693 [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 3 00:18:15.054: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-9280" for this suite. [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:78 • [SLOW TEST:60.517 seconds] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 PriorityClass endpoints /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:673 verify PriorityClass endpoints can be operated with different HTTP methods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] PriorityClass endpoints verify PriorityClass endpoints can be operated with different HTTP methods [Conformance]","total":309,"completed":255,"skipped":4532,"failed":0} SSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 3 00:18:15.123: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted Feb 3 00:18:23.789: INFO: 7 pods remaining Feb 3 00:18:23.789: INFO: 0 pods has nil DeletionTimestamp Feb 3 00:18:23.789: INFO: Feb 3 00:18:25.272: INFO: 0 pods remaining Feb 3 00:18:25.272: INFO: 0 pods has nil DeletionTimestamp Feb 3 00:18:25.272: INFO: STEP: Gathering metrics W0203 00:18:26.591548 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Feb 3 00:19:28.608: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering. [AfterEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 3 00:19:28.608: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-8711" for this suite. 
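The garbage-collector case above requests a delete that keeps the replication controller around until all of its pods are gone, which is the behaviour of foreground cascading deletion. A loose kubectl approximation (the rc name is a placeholder, and --cascade=foreground is the v1.20+ spelling of that propagation policy):

kubectl --namespace=gc-8711 delete rc <rc-name> --cascade=foreground --wait=false
# while dependents are terminating the rc still exists, marked with a deletionTimestamp
kubectl --namespace=gc-8711 get rc <rc-name> -o jsonpath='{.metadata.deletionTimestamp}'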
• [SLOW TEST:73.493 seconds] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":309,"completed":256,"skipped":4541,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] version v1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 3 00:19:28.616: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy through a service and a pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: starting an echo server on multiple ports STEP: creating replication controller proxy-service-m8svm in namespace proxy-4796 I0203 00:19:28.879261 7 runners.go:190] Created replication controller with name: proxy-service-m8svm, namespace: proxy-4796, replica count: 1 I0203 00:19:29.929706 7 runners.go:190] proxy-service-m8svm Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0203 00:19:30.929939 7 runners.go:190] proxy-service-m8svm Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0203 00:19:31.930190 7 runners.go:190] proxy-service-m8svm Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0203 00:19:32.930399 7 runners.go:190] proxy-service-m8svm Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0203 00:19:33.930593 7 runners.go:190] proxy-service-m8svm Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0203 00:19:34.930842 7 runners.go:190] proxy-service-m8svm Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0203 00:19:35.931079 7 runners.go:190] proxy-service-m8svm Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0203 00:19:36.931346 7 runners.go:190] proxy-service-m8svm Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0203 00:19:37.931612 7 runners.go:190] proxy-service-m8svm Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Feb 3 00:19:37.935: INFO: setup took 9.169025928s, starting test cases STEP: running 16 cases, 20 attempts per case, 320 total attempts Feb 3 00:19:37.947: INFO: (0) /api/v1/namespaces/proxy-4796/pods/http:proxy-service-m8svm-p65vj:160/proxy/: foo (200; 
11.622649ms) Feb 3 00:19:37.947: INFO: (0) /api/v1/namespaces/proxy-4796/pods/http:proxy-service-m8svm-p65vj:162/proxy/: bar (200; 11.596111ms) Feb 3 00:19:37.948: INFO: (0) /api/v1/namespaces/proxy-4796/services/http:proxy-service-m8svm:portname1/proxy/: foo (200; 12.739222ms) Feb 3 00:19:37.948: INFO: (0) /api/v1/namespaces/proxy-4796/pods/proxy-service-m8svm-p65vj:162/proxy/: bar (200; 12.991233ms) Feb 3 00:19:37.948: INFO: (0) /api/v1/namespaces/proxy-4796/services/proxy-service-m8svm:portname1/proxy/: foo (200; 12.878465ms) Feb 3 00:19:37.948: INFO: (0) /api/v1/namespaces/proxy-4796/pods/http:proxy-service-m8svm-p65vj:1080/proxy/: ... (200; 13.006967ms) Feb 3 00:19:37.948: INFO: (0) /api/v1/namespaces/proxy-4796/pods/proxy-service-m8svm-p65vj:160/proxy/: foo (200; 13.179211ms) Feb 3 00:19:37.948: INFO: (0) /api/v1/namespaces/proxy-4796/services/http:proxy-service-m8svm:portname2/proxy/: bar (200; 13.039298ms) Feb 3 00:19:37.948: INFO: (0) /api/v1/namespaces/proxy-4796/pods/proxy-service-m8svm-p65vj/proxy/: test (200; 13.078158ms) Feb 3 00:19:37.948: INFO: (0) /api/v1/namespaces/proxy-4796/pods/proxy-service-m8svm-p65vj:1080/proxy/: test<... (200; 13.209668ms) Feb 3 00:19:37.948: INFO: (0) /api/v1/namespaces/proxy-4796/services/proxy-service-m8svm:portname2/proxy/: bar (200; 13.223099ms) Feb 3 00:19:37.952: INFO: (0) /api/v1/namespaces/proxy-4796/services/https:proxy-service-m8svm:tlsportname2/proxy/: tls qux (200; 17.007891ms) Feb 3 00:19:37.952: INFO: (0) /api/v1/namespaces/proxy-4796/pods/https:proxy-service-m8svm-p65vj:462/proxy/: tls qux (200; 16.934985ms) Feb 3 00:19:37.952: INFO: (0) /api/v1/namespaces/proxy-4796/services/https:proxy-service-m8svm:tlsportname1/proxy/: tls baz (200; 17.463326ms) Feb 3 00:19:37.952: INFO: (0) /api/v1/namespaces/proxy-4796/pods/https:proxy-service-m8svm-p65vj:460/proxy/: tls baz (200; 17.234804ms) Feb 3 00:19:37.953: INFO: (0) /api/v1/namespaces/proxy-4796/pods/https:proxy-service-m8svm-p65vj:443/proxy/: test<... (200; 4.248407ms) Feb 3 00:19:37.958: INFO: (1) /api/v1/namespaces/proxy-4796/pods/proxy-service-m8svm-p65vj/proxy/: test (200; 4.101247ms) Feb 3 00:19:37.958: INFO: (1) /api/v1/namespaces/proxy-4796/pods/proxy-service-m8svm-p65vj:160/proxy/: foo (200; 4.199313ms) Feb 3 00:19:37.958: INFO: (1) /api/v1/namespaces/proxy-4796/pods/http:proxy-service-m8svm-p65vj:162/proxy/: bar (200; 4.11542ms) Feb 3 00:19:37.958: INFO: (1) /api/v1/namespaces/proxy-4796/pods/http:proxy-service-m8svm-p65vj:1080/proxy/: ... 
(200; 4.102497ms) Feb 3 00:19:37.958: INFO: (1) /api/v1/namespaces/proxy-4796/pods/https:proxy-service-m8svm-p65vj:460/proxy/: tls baz (200; 4.635503ms) Feb 3 00:19:37.959: INFO: (1) /api/v1/namespaces/proxy-4796/services/proxy-service-m8svm:portname2/proxy/: bar (200; 5.65496ms) Feb 3 00:19:37.959: INFO: (1) /api/v1/namespaces/proxy-4796/services/proxy-service-m8svm:portname1/proxy/: foo (200; 5.65076ms) Feb 3 00:19:37.959: INFO: (1) /api/v1/namespaces/proxy-4796/services/https:proxy-service-m8svm:tlsportname1/proxy/: tls baz (200; 5.831932ms) Feb 3 00:19:37.959: INFO: (1) /api/v1/namespaces/proxy-4796/services/http:proxy-service-m8svm:portname1/proxy/: foo (200; 5.799236ms) Feb 3 00:19:37.959: INFO: (1) /api/v1/namespaces/proxy-4796/services/http:proxy-service-m8svm:portname2/proxy/: bar (200; 5.854845ms) Feb 3 00:19:37.959: INFO: (1) /api/v1/namespaces/proxy-4796/services/https:proxy-service-m8svm:tlsportname2/proxy/: tls qux (200; 5.856796ms) Feb 3 00:19:37.962: INFO: (2) /api/v1/namespaces/proxy-4796/pods/http:proxy-service-m8svm-p65vj:160/proxy/: foo (200; 2.584331ms) Feb 3 00:19:37.962: INFO: (2) /api/v1/namespaces/proxy-4796/pods/http:proxy-service-m8svm-p65vj:1080/proxy/: ... (200; 2.675024ms) Feb 3 00:19:37.963: INFO: (2) /api/v1/namespaces/proxy-4796/pods/https:proxy-service-m8svm-p65vj:460/proxy/: tls baz (200; 2.944417ms) Feb 3 00:19:37.963: INFO: (2) /api/v1/namespaces/proxy-4796/pods/http:proxy-service-m8svm-p65vj:162/proxy/: bar (200; 3.37781ms) Feb 3 00:19:37.964: INFO: (2) /api/v1/namespaces/proxy-4796/services/proxy-service-m8svm:portname1/proxy/: foo (200; 3.630354ms) Feb 3 00:19:37.965: INFO: (2) /api/v1/namespaces/proxy-4796/services/https:proxy-service-m8svm:tlsportname2/proxy/: tls qux (200; 4.591061ms) Feb 3 00:19:37.965: INFO: (2) /api/v1/namespaces/proxy-4796/services/https:proxy-service-m8svm:tlsportname1/proxy/: tls baz (200; 4.355153ms) Feb 3 00:19:37.965: INFO: (2) /api/v1/namespaces/proxy-4796/pods/proxy-service-m8svm-p65vj:1080/proxy/: test<... (200; 4.151794ms) Feb 3 00:19:37.965: INFO: (2) /api/v1/namespaces/proxy-4796/services/http:proxy-service-m8svm:portname2/proxy/: bar (200; 4.491577ms) Feb 3 00:19:37.965: INFO: (2) /api/v1/namespaces/proxy-4796/pods/proxy-service-m8svm-p65vj:162/proxy/: bar (200; 4.265ms) Feb 3 00:19:37.965: INFO: (2) /api/v1/namespaces/proxy-4796/services/proxy-service-m8svm:portname2/proxy/: bar (200; 5.460657ms) Feb 3 00:19:37.965: INFO: (2) /api/v1/namespaces/proxy-4796/pods/proxy-service-m8svm-p65vj:160/proxy/: foo (200; 4.763208ms) Feb 3 00:19:37.965: INFO: (2) /api/v1/namespaces/proxy-4796/services/http:proxy-service-m8svm:portname1/proxy/: foo (200; 5.333189ms) Feb 3 00:19:37.965: INFO: (2) /api/v1/namespaces/proxy-4796/pods/https:proxy-service-m8svm-p65vj:443/proxy/: test (200; 4.804814ms) Feb 3 00:19:37.965: INFO: (2) /api/v1/namespaces/proxy-4796/pods/https:proxy-service-m8svm-p65vj:462/proxy/: tls qux (200; 5.23371ms) Feb 3 00:19:37.967: INFO: (3) /api/v1/namespaces/proxy-4796/pods/proxy-service-m8svm-p65vj:160/proxy/: foo (200; 2.07264ms) Feb 3 00:19:37.968: INFO: (3) /api/v1/namespaces/proxy-4796/pods/http:proxy-service-m8svm-p65vj:162/proxy/: bar (200; 2.921203ms) Feb 3 00:19:37.972: INFO: (3) /api/v1/namespaces/proxy-4796/pods/https:proxy-service-m8svm-p65vj:462/proxy/: tls qux (200; 6.113015ms) Feb 3 00:19:37.973: INFO: (3) /api/v1/namespaces/proxy-4796/pods/proxy-service-m8svm-p65vj:1080/proxy/: test<... 
(200; 7.299002ms) Feb 3 00:19:37.973: INFO: (3) /api/v1/namespaces/proxy-4796/pods/proxy-service-m8svm-p65vj/proxy/: test (200; 7.716072ms) Feb 3 00:19:37.973: INFO: (3) /api/v1/namespaces/proxy-4796/services/http:proxy-service-m8svm:portname1/proxy/: foo (200; 7.817485ms) Feb 3 00:19:37.973: INFO: (3) /api/v1/namespaces/proxy-4796/pods/https:proxy-service-m8svm-p65vj:460/proxy/: tls baz (200; 7.863269ms) Feb 3 00:19:37.973: INFO: (3) /api/v1/namespaces/proxy-4796/services/https:proxy-service-m8svm:tlsportname2/proxy/: tls qux (200; 8.056559ms) Feb 3 00:19:37.974: INFO: (3) /api/v1/namespaces/proxy-4796/pods/http:proxy-service-m8svm-p65vj:1080/proxy/: ... (200; 8.443074ms) Feb 3 00:19:37.974: INFO: (3) /api/v1/namespaces/proxy-4796/pods/https:proxy-service-m8svm-p65vj:443/proxy/: ... (200; 3.585049ms) Feb 3 00:19:37.980: INFO: (4) /api/v1/namespaces/proxy-4796/services/proxy-service-m8svm:portname2/proxy/: bar (200; 3.829928ms) Feb 3 00:19:37.980: INFO: (4) /api/v1/namespaces/proxy-4796/pods/proxy-service-m8svm-p65vj:160/proxy/: foo (200; 3.759088ms) Feb 3 00:19:37.980: INFO: (4) /api/v1/namespaces/proxy-4796/pods/https:proxy-service-m8svm-p65vj:462/proxy/: tls qux (200; 4.010539ms) Feb 3 00:19:37.981: INFO: (4) /api/v1/namespaces/proxy-4796/pods/proxy-service-m8svm-p65vj/proxy/: test (200; 4.304857ms) Feb 3 00:19:37.981: INFO: (4) /api/v1/namespaces/proxy-4796/pods/https:proxy-service-m8svm-p65vj:460/proxy/: tls baz (200; 5.043873ms) Feb 3 00:19:37.981: INFO: (4) /api/v1/namespaces/proxy-4796/pods/proxy-service-m8svm-p65vj:1080/proxy/: test<... (200; 5.034104ms) Feb 3 00:19:37.981: INFO: (4) /api/v1/namespaces/proxy-4796/services/http:proxy-service-m8svm:portname1/proxy/: foo (200; 4.99586ms) Feb 3 00:19:37.982: INFO: (4) /api/v1/namespaces/proxy-4796/pods/http:proxy-service-m8svm-p65vj:160/proxy/: foo (200; 5.576675ms) Feb 3 00:19:37.982: INFO: (4) /api/v1/namespaces/proxy-4796/pods/http:proxy-service-m8svm-p65vj:162/proxy/: bar (200; 5.523114ms) Feb 3 00:19:37.982: INFO: (4) /api/v1/namespaces/proxy-4796/services/https:proxy-service-m8svm:tlsportname2/proxy/: tls qux (200; 5.535129ms) Feb 3 00:19:37.982: INFO: (4) /api/v1/namespaces/proxy-4796/services/proxy-service-m8svm:portname1/proxy/: foo (200; 5.602733ms) Feb 3 00:19:37.982: INFO: (4) /api/v1/namespaces/proxy-4796/services/http:proxy-service-m8svm:portname2/proxy/: bar (200; 5.579344ms) Feb 3 00:19:37.982: INFO: (4) /api/v1/namespaces/proxy-4796/pods/https:proxy-service-m8svm-p65vj:443/proxy/: test<... (200; 4.77517ms) Feb 3 00:19:37.987: INFO: (5) /api/v1/namespaces/proxy-4796/pods/proxy-service-m8svm-p65vj/proxy/: test (200; 4.798999ms) Feb 3 00:19:37.987: INFO: (5) /api/v1/namespaces/proxy-4796/pods/https:proxy-service-m8svm-p65vj:462/proxy/: tls qux (200; 4.781647ms) Feb 3 00:19:37.987: INFO: (5) /api/v1/namespaces/proxy-4796/pods/https:proxy-service-m8svm-p65vj:460/proxy/: tls baz (200; 4.837614ms) Feb 3 00:19:37.987: INFO: (5) /api/v1/namespaces/proxy-4796/pods/https:proxy-service-m8svm-p65vj:443/proxy/: ... 
(200; 4.816898ms) Feb 3 00:19:37.987: INFO: (5) /api/v1/namespaces/proxy-4796/pods/proxy-service-m8svm-p65vj:162/proxy/: bar (200; 5.320933ms) Feb 3 00:19:37.987: INFO: (5) /api/v1/namespaces/proxy-4796/pods/http:proxy-service-m8svm-p65vj:162/proxy/: bar (200; 5.28004ms) Feb 3 00:19:37.987: INFO: (5) /api/v1/namespaces/proxy-4796/pods/http:proxy-service-m8svm-p65vj:160/proxy/: foo (200; 5.470456ms) Feb 3 00:19:37.988: INFO: (5) /api/v1/namespaces/proxy-4796/services/proxy-service-m8svm:portname2/proxy/: bar (200; 5.79363ms) Feb 3 00:19:37.988: INFO: (5) /api/v1/namespaces/proxy-4796/services/https:proxy-service-m8svm:tlsportname2/proxy/: tls qux (200; 5.911571ms) Feb 3 00:19:37.988: INFO: (5) /api/v1/namespaces/proxy-4796/services/proxy-service-m8svm:portname1/proxy/: foo (200; 5.881323ms) Feb 3 00:19:37.988: INFO: (5) /api/v1/namespaces/proxy-4796/services/http:proxy-service-m8svm:portname1/proxy/: foo (200; 5.904171ms) Feb 3 00:19:37.988: INFO: (5) /api/v1/namespaces/proxy-4796/services/https:proxy-service-m8svm:tlsportname1/proxy/: tls baz (200; 5.962172ms) Feb 3 00:19:37.988: INFO: (5) /api/v1/namespaces/proxy-4796/services/http:proxy-service-m8svm:portname2/proxy/: bar (200; 5.957339ms) Feb 3 00:19:38.016: INFO: (6) /api/v1/namespaces/proxy-4796/pods/proxy-service-m8svm-p65vj/proxy/: test (200; 27.584121ms) Feb 3 00:19:38.016: INFO: (6) /api/v1/namespaces/proxy-4796/pods/proxy-service-m8svm-p65vj:1080/proxy/: test<... (200; 27.624207ms) Feb 3 00:19:38.016: INFO: (6) /api/v1/namespaces/proxy-4796/pods/https:proxy-service-m8svm-p65vj:462/proxy/: tls qux (200; 27.615399ms) Feb 3 00:19:38.016: INFO: (6) /api/v1/namespaces/proxy-4796/pods/http:proxy-service-m8svm-p65vj:162/proxy/: bar (200; 28.128896ms) Feb 3 00:19:38.017: INFO: (6) /api/v1/namespaces/proxy-4796/pods/https:proxy-service-m8svm-p65vj:460/proxy/: tls baz (200; 28.59748ms) Feb 3 00:19:38.017: INFO: (6) /api/v1/namespaces/proxy-4796/pods/http:proxy-service-m8svm-p65vj:1080/proxy/: ... (200; 29.019044ms) Feb 3 00:19:38.018: INFO: (6) /api/v1/namespaces/proxy-4796/pods/proxy-service-m8svm-p65vj:162/proxy/: bar (200; 29.455306ms) Feb 3 00:19:38.018: INFO: (6) /api/v1/namespaces/proxy-4796/pods/proxy-service-m8svm-p65vj:160/proxy/: foo (200; 30.272044ms) Feb 3 00:19:38.018: INFO: (6) /api/v1/namespaces/proxy-4796/services/https:proxy-service-m8svm:tlsportname1/proxy/: tls baz (200; 30.31531ms) Feb 3 00:19:38.018: INFO: (6) /api/v1/namespaces/proxy-4796/services/http:proxy-service-m8svm:portname2/proxy/: bar (200; 30.261391ms) Feb 3 00:19:38.018: INFO: (6) /api/v1/namespaces/proxy-4796/services/proxy-service-m8svm:portname2/proxy/: bar (200; 30.28459ms) Feb 3 00:19:38.018: INFO: (6) /api/v1/namespaces/proxy-4796/services/proxy-service-m8svm:portname1/proxy/: foo (200; 30.295794ms) Feb 3 00:19:38.018: INFO: (6) /api/v1/namespaces/proxy-4796/services/https:proxy-service-m8svm:tlsportname2/proxy/: tls qux (200; 30.259648ms) Feb 3 00:19:38.018: INFO: (6) /api/v1/namespaces/proxy-4796/pods/https:proxy-service-m8svm-p65vj:443/proxy/: test (200; 6.2632ms) Feb 3 00:19:38.025: INFO: (7) /api/v1/namespaces/proxy-4796/pods/proxy-service-m8svm-p65vj:1080/proxy/: test<... (200; 6.325043ms) Feb 3 00:19:38.025: INFO: (7) /api/v1/namespaces/proxy-4796/pods/http:proxy-service-m8svm-p65vj:1080/proxy/: ... 
(200; 6.345502ms) Feb 3 00:19:38.025: INFO: (7) /api/v1/namespaces/proxy-4796/services/https:proxy-service-m8svm:tlsportname2/proxy/: tls qux (200; 6.351702ms) Feb 3 00:19:38.025: INFO: (7) /api/v1/namespaces/proxy-4796/pods/https:proxy-service-m8svm-p65vj:462/proxy/: tls qux (200; 6.569323ms) Feb 3 00:19:38.025: INFO: (7) /api/v1/namespaces/proxy-4796/services/http:proxy-service-m8svm:portname2/proxy/: bar (200; 6.7837ms) Feb 3 00:19:38.025: INFO: (7) /api/v1/namespaces/proxy-4796/pods/https:proxy-service-m8svm-p65vj:443/proxy/: test<... (200; 4.045239ms) Feb 3 00:19:38.030: INFO: (8) /api/v1/namespaces/proxy-4796/pods/https:proxy-service-m8svm-p65vj:443/proxy/: test (200; 4.256012ms) Feb 3 00:19:38.030: INFO: (8) /api/v1/namespaces/proxy-4796/pods/http:proxy-service-m8svm-p65vj:160/proxy/: foo (200; 4.588893ms) Feb 3 00:19:38.030: INFO: (8) /api/v1/namespaces/proxy-4796/pods/http:proxy-service-m8svm-p65vj:1080/proxy/: ... (200; 4.714746ms) Feb 3 00:19:38.031: INFO: (8) /api/v1/namespaces/proxy-4796/pods/https:proxy-service-m8svm-p65vj:462/proxy/: tls qux (200; 5.275196ms) Feb 3 00:19:38.031: INFO: (8) /api/v1/namespaces/proxy-4796/services/http:proxy-service-m8svm:portname1/proxy/: foo (200; 5.722338ms) Feb 3 00:19:38.032: INFO: (8) /api/v1/namespaces/proxy-4796/services/proxy-service-m8svm:portname1/proxy/: foo (200; 6.028506ms) Feb 3 00:19:38.032: INFO: (8) /api/v1/namespaces/proxy-4796/pods/proxy-service-m8svm-p65vj:160/proxy/: foo (200; 6.359156ms) Feb 3 00:19:38.032: INFO: (8) /api/v1/namespaces/proxy-4796/services/http:proxy-service-m8svm:portname2/proxy/: bar (200; 6.593773ms) Feb 3 00:19:38.032: INFO: (8) /api/v1/namespaces/proxy-4796/pods/proxy-service-m8svm-p65vj:162/proxy/: bar (200; 6.653654ms) Feb 3 00:19:38.032: INFO: (8) /api/v1/namespaces/proxy-4796/services/https:proxy-service-m8svm:tlsportname1/proxy/: tls baz (200; 6.62785ms) Feb 3 00:19:38.032: INFO: (8) /api/v1/namespaces/proxy-4796/services/https:proxy-service-m8svm:tlsportname2/proxy/: tls qux (200; 6.644474ms) Feb 3 00:19:38.033: INFO: (8) /api/v1/namespaces/proxy-4796/services/proxy-service-m8svm:portname2/proxy/: bar (200; 7.116408ms) Feb 3 00:19:38.035: INFO: (9) /api/v1/namespaces/proxy-4796/pods/http:proxy-service-m8svm-p65vj:160/proxy/: foo (200; 2.571732ms) Feb 3 00:19:38.036: INFO: (9) /api/v1/namespaces/proxy-4796/pods/https:proxy-service-m8svm-p65vj:443/proxy/: test<... (200; 3.185419ms) Feb 3 00:19:38.036: INFO: (9) /api/v1/namespaces/proxy-4796/pods/https:proxy-service-m8svm-p65vj:460/proxy/: tls baz (200; 3.201053ms) Feb 3 00:19:38.036: INFO: (9) /api/v1/namespaces/proxy-4796/pods/https:proxy-service-m8svm-p65vj:462/proxy/: tls qux (200; 3.178774ms) Feb 3 00:19:38.036: INFO: (9) /api/v1/namespaces/proxy-4796/pods/http:proxy-service-m8svm-p65vj:162/proxy/: bar (200; 3.682825ms) Feb 3 00:19:38.038: INFO: (9) /api/v1/namespaces/proxy-4796/pods/http:proxy-service-m8svm-p65vj:1080/proxy/: ... 
(200; 5.16898ms) Feb 3 00:19:38.038: INFO: (9) /api/v1/namespaces/proxy-4796/services/http:proxy-service-m8svm:portname2/proxy/: bar (200; 5.245685ms) Feb 3 00:19:38.038: INFO: (9) /api/v1/namespaces/proxy-4796/services/https:proxy-service-m8svm:tlsportname2/proxy/: tls qux (200; 5.201688ms) Feb 3 00:19:38.038: INFO: (9) /api/v1/namespaces/proxy-4796/services/http:proxy-service-m8svm:portname1/proxy/: foo (200; 5.207047ms) Feb 3 00:19:38.038: INFO: (9) /api/v1/namespaces/proxy-4796/services/proxy-service-m8svm:portname1/proxy/: foo (200; 5.333168ms) Feb 3 00:19:38.038: INFO: (9) /api/v1/namespaces/proxy-4796/pods/proxy-service-m8svm-p65vj/proxy/: test (200; 5.290012ms) Feb 3 00:19:38.038: INFO: (9) /api/v1/namespaces/proxy-4796/services/proxy-service-m8svm:portname2/proxy/: bar (200; 5.311169ms) Feb 3 00:19:38.038: INFO: (9) /api/v1/namespaces/proxy-4796/services/https:proxy-service-m8svm:tlsportname1/proxy/: tls baz (200; 5.375264ms) Feb 3 00:19:38.038: INFO: (9) /api/v1/namespaces/proxy-4796/pods/proxy-service-m8svm-p65vj:162/proxy/: bar (200; 5.343722ms) Feb 3 00:19:38.042: INFO: (10) /api/v1/namespaces/proxy-4796/services/http:proxy-service-m8svm:portname2/proxy/: bar (200; 4.15669ms) Feb 3 00:19:38.042: INFO: (10) /api/v1/namespaces/proxy-4796/pods/https:proxy-service-m8svm-p65vj:460/proxy/: tls baz (200; 4.181938ms) Feb 3 00:19:38.043: INFO: (10) /api/v1/namespaces/proxy-4796/services/https:proxy-service-m8svm:tlsportname2/proxy/: tls qux (200; 4.402114ms) Feb 3 00:19:38.043: INFO: (10) /api/v1/namespaces/proxy-4796/services/https:proxy-service-m8svm:tlsportname1/proxy/: tls baz (200; 4.491289ms) Feb 3 00:19:38.043: INFO: (10) /api/v1/namespaces/proxy-4796/pods/proxy-service-m8svm-p65vj:1080/proxy/: test<... (200; 4.997711ms) Feb 3 00:19:38.043: INFO: (10) /api/v1/namespaces/proxy-4796/pods/proxy-service-m8svm-p65vj:162/proxy/: bar (200; 5.13476ms) Feb 3 00:19:38.044: INFO: (10) /api/v1/namespaces/proxy-4796/pods/https:proxy-service-m8svm-p65vj:462/proxy/: tls qux (200; 5.436786ms) Feb 3 00:19:38.044: INFO: (10) /api/v1/namespaces/proxy-4796/services/proxy-service-m8svm:portname2/proxy/: bar (200; 5.409172ms) Feb 3 00:19:38.044: INFO: (10) /api/v1/namespaces/proxy-4796/pods/http:proxy-service-m8svm-p65vj:1080/proxy/: ... (200; 5.415411ms) Feb 3 00:19:38.044: INFO: (10) /api/v1/namespaces/proxy-4796/pods/proxy-service-m8svm-p65vj/proxy/: test (200; 5.44819ms) Feb 3 00:19:38.044: INFO: (10) /api/v1/namespaces/proxy-4796/pods/http:proxy-service-m8svm-p65vj:162/proxy/: bar (200; 5.470649ms) Feb 3 00:19:38.044: INFO: (10) /api/v1/namespaces/proxy-4796/services/http:proxy-service-m8svm:portname1/proxy/: foo (200; 5.481841ms) Feb 3 00:19:38.044: INFO: (10) /api/v1/namespaces/proxy-4796/pods/proxy-service-m8svm-p65vj:160/proxy/: foo (200; 5.558252ms) Feb 3 00:19:38.044: INFO: (10) /api/v1/namespaces/proxy-4796/pods/https:proxy-service-m8svm-p65vj:443/proxy/: ... 
(200; 3.769757ms) Feb 3 00:19:38.048: INFO: (11) /api/v1/namespaces/proxy-4796/pods/proxy-service-m8svm-p65vj:162/proxy/: bar (200; 3.824776ms) Feb 3 00:19:38.048: INFO: (11) /api/v1/namespaces/proxy-4796/pods/http:proxy-service-m8svm-p65vj:160/proxy/: foo (200; 3.838849ms) Feb 3 00:19:38.048: INFO: (11) /api/v1/namespaces/proxy-4796/pods/https:proxy-service-m8svm-p65vj:462/proxy/: tls qux (200; 4.073089ms) Feb 3 00:19:38.048: INFO: (11) /api/v1/namespaces/proxy-4796/pods/proxy-service-m8svm-p65vj/proxy/: test (200; 4.006066ms) Feb 3 00:19:38.048: INFO: (11) /api/v1/namespaces/proxy-4796/pods/proxy-service-m8svm-p65vj:1080/proxy/: test<... (200; 4.033538ms) Feb 3 00:19:38.048: INFO: (11) /api/v1/namespaces/proxy-4796/services/proxy-service-m8svm:portname2/proxy/: bar (200; 4.064025ms) Feb 3 00:19:38.048: INFO: (11) /api/v1/namespaces/proxy-4796/pods/http:proxy-service-m8svm-p65vj:162/proxy/: bar (200; 4.060925ms) Feb 3 00:19:38.048: INFO: (11) /api/v1/namespaces/proxy-4796/services/proxy-service-m8svm:portname1/proxy/: foo (200; 4.244843ms) Feb 3 00:19:38.048: INFO: (11) /api/v1/namespaces/proxy-4796/services/http:proxy-service-m8svm:portname1/proxy/: foo (200; 4.287309ms) Feb 3 00:19:38.048: INFO: (11) /api/v1/namespaces/proxy-4796/services/https:proxy-service-m8svm:tlsportname1/proxy/: tls baz (200; 4.254993ms) Feb 3 00:19:38.051: INFO: (12) /api/v1/namespaces/proxy-4796/pods/proxy-service-m8svm-p65vj:160/proxy/: foo (200; 2.366051ms) Feb 3 00:19:38.051: INFO: (12) /api/v1/namespaces/proxy-4796/pods/proxy-service-m8svm-p65vj:162/proxy/: bar (200; 2.507674ms) Feb 3 00:19:38.051: INFO: (12) /api/v1/namespaces/proxy-4796/pods/https:proxy-service-m8svm-p65vj:460/proxy/: tls baz (200; 3.172963ms) Feb 3 00:19:38.052: INFO: (12) /api/v1/namespaces/proxy-4796/pods/http:proxy-service-m8svm-p65vj:160/proxy/: foo (200; 3.70327ms) Feb 3 00:19:38.052: INFO: (12) /api/v1/namespaces/proxy-4796/services/http:proxy-service-m8svm:portname1/proxy/: foo (200; 3.882178ms) Feb 3 00:19:38.053: INFO: (12) /api/v1/namespaces/proxy-4796/services/https:proxy-service-m8svm:tlsportname2/proxy/: tls qux (200; 4.385688ms) Feb 3 00:19:38.053: INFO: (12) /api/v1/namespaces/proxy-4796/services/http:proxy-service-m8svm:portname2/proxy/: bar (200; 4.76418ms) Feb 3 00:19:38.053: INFO: (12) /api/v1/namespaces/proxy-4796/services/proxy-service-m8svm:portname2/proxy/: bar (200; 4.720359ms) Feb 3 00:19:38.053: INFO: (12) /api/v1/namespaces/proxy-4796/pods/proxy-service-m8svm-p65vj/proxy/: test (200; 4.705142ms) Feb 3 00:19:38.053: INFO: (12) /api/v1/namespaces/proxy-4796/pods/https:proxy-service-m8svm-p65vj:462/proxy/: tls qux (200; 4.771216ms) Feb 3 00:19:38.053: INFO: (12) /api/v1/namespaces/proxy-4796/pods/https:proxy-service-m8svm-p65vj:443/proxy/: ... (200; 4.780898ms) Feb 3 00:19:38.053: INFO: (12) /api/v1/namespaces/proxy-4796/pods/http:proxy-service-m8svm-p65vj:162/proxy/: bar (200; 4.865735ms) Feb 3 00:19:38.053: INFO: (12) /api/v1/namespaces/proxy-4796/pods/proxy-service-m8svm-p65vj:1080/proxy/: test<... (200; 4.807416ms) Feb 3 00:19:38.055: INFO: (13) /api/v1/namespaces/proxy-4796/pods/proxy-service-m8svm-p65vj/proxy/: test (200; 2.259533ms) Feb 3 00:19:38.057: INFO: (13) /api/v1/namespaces/proxy-4796/services/http:proxy-service-m8svm:portname1/proxy/: foo (200; 4.025676ms) Feb 3 00:19:38.058: INFO: (13) /api/v1/namespaces/proxy-4796/pods/https:proxy-service-m8svm-p65vj:443/proxy/: test<... 
(200; 4.619025ms) Feb 3 00:19:38.058: INFO: (13) /api/v1/namespaces/proxy-4796/services/proxy-service-m8svm:portname2/proxy/: bar (200; 4.613923ms) Feb 3 00:19:38.058: INFO: (13) /api/v1/namespaces/proxy-4796/pods/proxy-service-m8svm-p65vj:160/proxy/: foo (200; 4.895857ms) Feb 3 00:19:38.058: INFO: (13) /api/v1/namespaces/proxy-4796/pods/https:proxy-service-m8svm-p65vj:460/proxy/: tls baz (200; 4.837295ms) Feb 3 00:19:38.058: INFO: (13) /api/v1/namespaces/proxy-4796/services/http:proxy-service-m8svm:portname2/proxy/: bar (200; 4.938302ms) Feb 3 00:19:38.058: INFO: (13) /api/v1/namespaces/proxy-4796/services/https:proxy-service-m8svm:tlsportname1/proxy/: tls baz (200; 4.950808ms) Feb 3 00:19:38.058: INFO: (13) /api/v1/namespaces/proxy-4796/services/https:proxy-service-m8svm:tlsportname2/proxy/: tls qux (200; 4.845801ms) Feb 3 00:19:38.058: INFO: (13) /api/v1/namespaces/proxy-4796/pods/https:proxy-service-m8svm-p65vj:462/proxy/: tls qux (200; 4.855234ms) Feb 3 00:19:38.058: INFO: (13) /api/v1/namespaces/proxy-4796/pods/proxy-service-m8svm-p65vj:162/proxy/: bar (200; 4.967682ms) Feb 3 00:19:38.058: INFO: (13) /api/v1/namespaces/proxy-4796/pods/http:proxy-service-m8svm-p65vj:1080/proxy/: ... (200; 4.923659ms) Feb 3 00:19:38.058: INFO: (13) /api/v1/namespaces/proxy-4796/pods/http:proxy-service-m8svm-p65vj:162/proxy/: bar (200; 5.137074ms) Feb 3 00:19:38.062: INFO: (14) /api/v1/namespaces/proxy-4796/pods/http:proxy-service-m8svm-p65vj:162/proxy/: bar (200; 3.221741ms) Feb 3 00:19:38.062: INFO: (14) /api/v1/namespaces/proxy-4796/pods/https:proxy-service-m8svm-p65vj:462/proxy/: tls qux (200; 3.50446ms) Feb 3 00:19:38.062: INFO: (14) /api/v1/namespaces/proxy-4796/pods/https:proxy-service-m8svm-p65vj:460/proxy/: tls baz (200; 3.565768ms) Feb 3 00:19:38.062: INFO: (14) /api/v1/namespaces/proxy-4796/pods/proxy-service-m8svm-p65vj:1080/proxy/: test<... (200; 3.814708ms) Feb 3 00:19:38.062: INFO: (14) /api/v1/namespaces/proxy-4796/pods/proxy-service-m8svm-p65vj/proxy/: test (200; 3.810687ms) Feb 3 00:19:38.062: INFO: (14) /api/v1/namespaces/proxy-4796/pods/proxy-service-m8svm-p65vj:162/proxy/: bar (200; 3.784755ms) Feb 3 00:19:38.062: INFO: (14) /api/v1/namespaces/proxy-4796/pods/http:proxy-service-m8svm-p65vj:1080/proxy/: ... (200; 3.753351ms) Feb 3 00:19:38.062: INFO: (14) /api/v1/namespaces/proxy-4796/pods/http:proxy-service-m8svm-p65vj:160/proxy/: foo (200; 3.816593ms) Feb 3 00:19:38.062: INFO: (14) /api/v1/namespaces/proxy-4796/pods/proxy-service-m8svm-p65vj:160/proxy/: foo (200; 3.922742ms) Feb 3 00:19:38.062: INFO: (14) /api/v1/namespaces/proxy-4796/pods/https:proxy-service-m8svm-p65vj:443/proxy/: test (200; 3.931535ms) Feb 3 00:19:38.069: INFO: (15) /api/v1/namespaces/proxy-4796/pods/http:proxy-service-m8svm-p65vj:160/proxy/: foo (200; 4.143609ms) Feb 3 00:19:38.069: INFO: (15) /api/v1/namespaces/proxy-4796/pods/http:proxy-service-m8svm-p65vj:1080/proxy/: ... (200; 4.36858ms) Feb 3 00:19:38.070: INFO: (15) /api/v1/namespaces/proxy-4796/pods/proxy-service-m8svm-p65vj:1080/proxy/: test<... (200; 4.824478ms) Feb 3 00:19:38.070: INFO: (15) /api/v1/namespaces/proxy-4796/services/https:proxy-service-m8svm:tlsportname2/proxy/: tls qux (200; 4.822779ms) Feb 3 00:19:38.070: INFO: (15) /api/v1/namespaces/proxy-4796/pods/https:proxy-service-m8svm-p65vj:443/proxy/: test<... 
(200; 11.230427ms) Feb 3 00:19:38.082: INFO: (16) /api/v1/namespaces/proxy-4796/pods/proxy-service-m8svm-p65vj:162/proxy/: bar (200; 11.290968ms) Feb 3 00:19:38.082: INFO: (16) /api/v1/namespaces/proxy-4796/pods/https:proxy-service-m8svm-p65vj:443/proxy/: test (200; 11.31487ms) Feb 3 00:19:38.082: INFO: (16) /api/v1/namespaces/proxy-4796/pods/https:proxy-service-m8svm-p65vj:460/proxy/: tls baz (200; 11.389395ms) Feb 3 00:19:38.082: INFO: (16) /api/v1/namespaces/proxy-4796/services/proxy-service-m8svm:portname2/proxy/: bar (200; 11.422123ms) Feb 3 00:19:38.082: INFO: (16) /api/v1/namespaces/proxy-4796/pods/http:proxy-service-m8svm-p65vj:1080/proxy/: ... (200; 11.362799ms) Feb 3 00:19:38.082: INFO: (16) /api/v1/namespaces/proxy-4796/pods/http:proxy-service-m8svm-p65vj:160/proxy/: foo (200; 11.378747ms) Feb 3 00:19:38.086: INFO: (17) /api/v1/namespaces/proxy-4796/services/http:proxy-service-m8svm:portname1/proxy/: foo (200; 4.291506ms) Feb 3 00:19:38.086: INFO: (17) /api/v1/namespaces/proxy-4796/pods/proxy-service-m8svm-p65vj:162/proxy/: bar (200; 3.874153ms) Feb 3 00:19:38.086: INFO: (17) /api/v1/namespaces/proxy-4796/pods/https:proxy-service-m8svm-p65vj:462/proxy/: tls qux (200; 4.041551ms) Feb 3 00:19:38.086: INFO: (17) /api/v1/namespaces/proxy-4796/pods/proxy-service-m8svm-p65vj:1080/proxy/: test<... (200; 3.953185ms) Feb 3 00:19:38.086: INFO: (17) /api/v1/namespaces/proxy-4796/pods/https:proxy-service-m8svm-p65vj:443/proxy/: ... (200; 4.962749ms) Feb 3 00:19:38.087: INFO: (17) /api/v1/namespaces/proxy-4796/services/https:proxy-service-m8svm:tlsportname1/proxy/: tls baz (200; 4.261515ms) Feb 3 00:19:38.087: INFO: (17) /api/v1/namespaces/proxy-4796/services/http:proxy-service-m8svm:portname2/proxy/: bar (200; 4.344346ms) Feb 3 00:19:38.087: INFO: (17) /api/v1/namespaces/proxy-4796/services/proxy-service-m8svm:portname2/proxy/: bar (200; 4.875421ms) Feb 3 00:19:38.087: INFO: (17) /api/v1/namespaces/proxy-4796/services/proxy-service-m8svm:portname1/proxy/: foo (200; 5.051544ms) Feb 3 00:19:38.087: INFO: (17) /api/v1/namespaces/proxy-4796/pods/https:proxy-service-m8svm-p65vj:460/proxy/: tls baz (200; 5.696519ms) Feb 3 00:19:38.088: INFO: (17) /api/v1/namespaces/proxy-4796/pods/proxy-service-m8svm-p65vj/proxy/: test (200; 5.966258ms) Feb 3 00:19:38.092: INFO: (18) /api/v1/namespaces/proxy-4796/pods/http:proxy-service-m8svm-p65vj:1080/proxy/: ... (200; 3.799981ms) Feb 3 00:19:38.092: INFO: (18) /api/v1/namespaces/proxy-4796/pods/https:proxy-service-m8svm-p65vj:460/proxy/: tls baz (200; 3.862634ms) Feb 3 00:19:38.093: INFO: (18) /api/v1/namespaces/proxy-4796/pods/http:proxy-service-m8svm-p65vj:162/proxy/: bar (200; 4.843302ms) Feb 3 00:19:38.093: INFO: (18) /api/v1/namespaces/proxy-4796/pods/proxy-service-m8svm-p65vj:162/proxy/: bar (200; 4.856157ms) Feb 3 00:19:38.093: INFO: (18) /api/v1/namespaces/proxy-4796/pods/https:proxy-service-m8svm-p65vj:443/proxy/: test (200; 5.663504ms) Feb 3 00:19:38.093: INFO: (18) /api/v1/namespaces/proxy-4796/pods/proxy-service-m8svm-p65vj:1080/proxy/: test<... 
(200; 5.648055ms) Feb 3 00:19:38.094: INFO: (18) /api/v1/namespaces/proxy-4796/services/http:proxy-service-m8svm:portname1/proxy/: foo (200; 6.27364ms) Feb 3 00:19:38.094: INFO: (18) /api/v1/namespaces/proxy-4796/services/http:proxy-service-m8svm:portname2/proxy/: bar (200; 6.243282ms) Feb 3 00:19:38.094: INFO: (18) /api/v1/namespaces/proxy-4796/services/https:proxy-service-m8svm:tlsportname1/proxy/: tls baz (200; 6.561796ms) Feb 3 00:19:38.094: INFO: (18) /api/v1/namespaces/proxy-4796/services/proxy-service-m8svm:portname2/proxy/: bar (200; 6.412855ms) Feb 3 00:19:38.094: INFO: (18) /api/v1/namespaces/proxy-4796/services/proxy-service-m8svm:portname1/proxy/: foo (200; 6.439028ms) Feb 3 00:19:38.098: INFO: (19) /api/v1/namespaces/proxy-4796/pods/http:proxy-service-m8svm-p65vj:162/proxy/: bar (200; 3.411765ms) Feb 3 00:19:38.098: INFO: (19) /api/v1/namespaces/proxy-4796/pods/proxy-service-m8svm-p65vj/proxy/: test (200; 3.477622ms) Feb 3 00:19:38.098: INFO: (19) /api/v1/namespaces/proxy-4796/pods/proxy-service-m8svm-p65vj:1080/proxy/: test<... (200; 4.066833ms) Feb 3 00:19:38.099: INFO: (19) /api/v1/namespaces/proxy-4796/pods/https:proxy-service-m8svm-p65vj:462/proxy/: tls qux (200; 4.385252ms) Feb 3 00:19:38.099: INFO: (19) /api/v1/namespaces/proxy-4796/pods/http:proxy-service-m8svm-p65vj:160/proxy/: foo (200; 4.448147ms) Feb 3 00:19:38.099: INFO: (19) /api/v1/namespaces/proxy-4796/pods/https:proxy-service-m8svm-p65vj:460/proxy/: tls baz (200; 4.442989ms) Feb 3 00:19:38.099: INFO: (19) /api/v1/namespaces/proxy-4796/pods/https:proxy-service-m8svm-p65vj:443/proxy/: ... (200; 5.130659ms) Feb 3 00:19:38.100: INFO: (19) /api/v1/namespaces/proxy-4796/pods/proxy-service-m8svm-p65vj:160/proxy/: foo (200; 5.250456ms) STEP: deleting ReplicationController proxy-service-m8svm in namespace proxy-4796, will wait for the garbage collector to delete the pods Feb 3 00:19:38.159: INFO: Deleting ReplicationController proxy-service-m8svm took: 7.113389ms Feb 3 00:19:38.759: INFO: Terminating ReplicationController proxy-service-m8svm pods took: 600.239231ms [AfterEach] version v1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 3 00:19:50.159: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-4796" for this suite. 
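The proxy attempts above all hit API-server proxy subresource paths of the form /api/v1/namespaces/<ns>/pods/<scheme>:<pod>:<port>/proxy/ or .../services/<scheme>:<svc>:<portname>/proxy/. As a rough sketch only (not the test's own code; the namespace, pod name, and port below are placeholders, and the client-go ProxyGet helper is assumed to be available in this release), one such request could be issued like this:

```go
package main

import (
	"context"
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from the same kubeconfig the suite uses.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Equivalent to GET /api/v1/namespaces/default/pods/http:example-pod:160/proxy/
	// (placeholder names; the run above used generated pod names and ports).
	body, err := cs.CoreV1().Pods("default").
		ProxyGet("http", "example-pod", "160", "/", nil).
		DoRaw(context.TODO())
	if err != nil {
		panic(err)
	}
	fmt.Println(string(body))
}
```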
• [SLOW TEST:21.551 seconds] [sig-network] Proxy /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 version v1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:59 should proxy through a service and a pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance]","total":309,"completed":257,"skipped":4564,"failed":0} SSSSSSSSSSS ------------------------------ [sig-network] Services should find a service from listing all namespaces [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 3 00:19:50.167: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:745 [It] should find a service from listing all namespaces [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: fetching services [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 3 00:19:50.275: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-195" for this suite. 
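The "find a service from listing all namespaces" case above boils down to a single cluster-wide List call against the Services resource. A minimal sketch, assuming the same kubeconfig path used by the suite (the code below is illustrative, not the test's implementation):

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// An empty namespace (metav1.NamespaceAll) lists across every namespace,
	// the client-go analogue of `kubectl get services --all-namespaces`.
	svcs, err := cs.CoreV1().Services(metav1.NamespaceAll).List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, s := range svcs.Items {
		fmt.Printf("%s/%s\n", s.Namespace, s.Name)
	}
}
```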
[AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 •{"msg":"PASSED [sig-network] Services should find a service from listing all namespaces [Conformance]","total":309,"completed":258,"skipped":4575,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 3 00:19:50.282: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating pod pod-subpath-test-configmap-hzlb STEP: Creating a pod to test atomic-volume-subpath Feb 3 00:19:50.386: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-hzlb" in namespace "subpath-4804" to be "Succeeded or Failed" Feb 3 00:19:50.400: INFO: Pod "pod-subpath-test-configmap-hzlb": Phase="Pending", Reason="", readiness=false. Elapsed: 14.209097ms Feb 3 00:19:52.405: INFO: Pod "pod-subpath-test-configmap-hzlb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019279817s Feb 3 00:19:54.410: INFO: Pod "pod-subpath-test-configmap-hzlb": Phase="Running", Reason="", readiness=true. Elapsed: 4.024289581s Feb 3 00:19:56.415: INFO: Pod "pod-subpath-test-configmap-hzlb": Phase="Running", Reason="", readiness=true. Elapsed: 6.029017907s Feb 3 00:19:58.419: INFO: Pod "pod-subpath-test-configmap-hzlb": Phase="Running", Reason="", readiness=true. Elapsed: 8.033444809s Feb 3 00:20:00.425: INFO: Pod "pod-subpath-test-configmap-hzlb": Phase="Running", Reason="", readiness=true. Elapsed: 10.038929795s Feb 3 00:20:02.430: INFO: Pod "pod-subpath-test-configmap-hzlb": Phase="Running", Reason="", readiness=true. Elapsed: 12.044143866s Feb 3 00:20:04.435: INFO: Pod "pod-subpath-test-configmap-hzlb": Phase="Running", Reason="", readiness=true. Elapsed: 14.049357605s Feb 3 00:20:06.440: INFO: Pod "pod-subpath-test-configmap-hzlb": Phase="Running", Reason="", readiness=true. Elapsed: 16.054514686s Feb 3 00:20:08.445: INFO: Pod "pod-subpath-test-configmap-hzlb": Phase="Running", Reason="", readiness=true. Elapsed: 18.059390905s Feb 3 00:20:10.450: INFO: Pod "pod-subpath-test-configmap-hzlb": Phase="Running", Reason="", readiness=true. Elapsed: 20.063954954s Feb 3 00:20:12.455: INFO: Pod "pod-subpath-test-configmap-hzlb": Phase="Running", Reason="", readiness=true. Elapsed: 22.06897425s Feb 3 00:20:14.459: INFO: Pod "pod-subpath-test-configmap-hzlb": Phase="Running", Reason="", readiness=true. Elapsed: 24.072875861s Feb 3 00:20:16.463: INFO: Pod "pod-subpath-test-configmap-hzlb": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 26.077580433s STEP: Saw pod success Feb 3 00:20:16.463: INFO: Pod "pod-subpath-test-configmap-hzlb" satisfied condition "Succeeded or Failed" Feb 3 00:20:16.467: INFO: Trying to get logs from node leguer-worker pod pod-subpath-test-configmap-hzlb container test-container-subpath-configmap-hzlb: STEP: delete the pod Feb 3 00:20:16.524: INFO: Waiting for pod pod-subpath-test-configmap-hzlb to disappear Feb 3 00:20:16.531: INFO: Pod pod-subpath-test-configmap-hzlb no longer exists STEP: Deleting pod pod-subpath-test-configmap-hzlb Feb 3 00:20:16.531: INFO: Deleting pod "pod-subpath-test-configmap-hzlb" in namespace "subpath-4804" [AfterEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 3 00:20:16.533: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-4804" for this suite. • [SLOW TEST:26.259 seconds] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":309,"completed":259,"skipped":4585,"failed":0} SSSSS ------------------------------ [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 3 00:20:16.541: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating configMap with name cm-test-opt-del-91e09508-3a46-493a-9279-7786970fffd6 STEP: Creating configMap with name cm-test-opt-upd-8776fb83-4308-41ec-b87d-df78b92cee6c STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-91e09508-3a46-493a-9279-7786970fffd6 STEP: Updating configmap cm-test-opt-upd-8776fb83-4308-41ec-b87d-df78b92cee6c STEP: Creating configMap with name cm-test-opt-create-b409b8c1-4aa6-4e67-a825-4634ec84fa06 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 3 00:20:24.847: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4338" for this suite. 
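The projected-configMap case above mounts ConfigMaps through a projected volume with the sources marked optional, so deleting one source does not break the pod and updating another is eventually reflected in the mounted files. A rough sketch of that volume shape (ConfigMap names are placeholders, not the generated names from this run):

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	optional := true
	vol := corev1.Volume{
		Name: "projected-configmap-volume",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{
					{ConfigMap: &corev1.ConfigMapProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: "cm-test-opt-del"},
						Optional:             &optional, // pod keeps running if this ConfigMap is deleted
					}},
					{ConfigMap: &corev1.ConfigMapProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: "cm-test-opt-upd"},
						Optional:             &optional, // edits to this ConfigMap show up in the mounted files
					}},
				},
			},
		},
	}
	fmt.Printf("%+v\n", vol)
}
```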
• [SLOW TEST:8.315 seconds] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":309,"completed":260,"skipped":4590,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 3 00:20:24.856: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete RS created by deployment when not orphaning [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for all rs to be garbage collected STEP: expected 0 rs, got 1 rs STEP: expected 0 pods, got 2 pods STEP: expected 0 pods, got 2 pods STEP: Gathering metrics W0203 00:20:26.607734 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Feb 3 00:21:28.682: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering. [AfterEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 3 00:21:28.682: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-5412" for this suite. 
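The garbage-collector case above deletes a Deployment without orphaning, so the owned ReplicaSet and Pods are cascaded away. The key knob is the delete propagation policy; a minimal sketch with placeholder names (not the test's code):

```go
package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Foreground (or Background) propagation lets the garbage collector delete
	// the Deployment's ReplicaSets and Pods as well; Orphan would leave them behind.
	policy := metav1.DeletePropagationForeground
	if err := cs.AppsV1().Deployments("default").Delete(
		context.TODO(), "example-deployment",
		metav1.DeleteOptions{PropagationPolicy: &policy},
	); err != nil {
		panic(err)
	}
}
```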
• [SLOW TEST:63.838 seconds] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should delete RS created by deployment when not orphaning [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":309,"completed":261,"skipped":4603,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 3 00:21:28.694: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a pod to test emptydir 0644 on tmpfs Feb 3 00:21:28.809: INFO: Waiting up to 5m0s for pod "pod-12093208-607f-463d-a046-6a0b8702e1bd" in namespace "emptydir-7759" to be "Succeeded or Failed" Feb 3 00:21:28.824: INFO: Pod "pod-12093208-607f-463d-a046-6a0b8702e1bd": Phase="Pending", Reason="", readiness=false. Elapsed: 14.577068ms Feb 3 00:21:30.828: INFO: Pod "pod-12093208-607f-463d-a046-6a0b8702e1bd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018781434s Feb 3 00:21:32.832: INFO: Pod "pod-12093208-607f-463d-a046-6a0b8702e1bd": Phase="Running", Reason="", readiness=true. Elapsed: 4.022712071s Feb 3 00:21:34.836: INFO: Pod "pod-12093208-607f-463d-a046-6a0b8702e1bd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.026688675s STEP: Saw pod success Feb 3 00:21:34.836: INFO: Pod "pod-12093208-607f-463d-a046-6a0b8702e1bd" satisfied condition "Succeeded or Failed" Feb 3 00:21:34.839: INFO: Trying to get logs from node leguer-worker pod pod-12093208-607f-463d-a046-6a0b8702e1bd container test-container: STEP: delete the pod Feb 3 00:21:34.858: INFO: Waiting for pod pod-12093208-607f-463d-a046-6a0b8702e1bd to disappear Feb 3 00:21:34.879: INFO: Pod pod-12093208-607f-463d-a046-6a0b8702e1bd no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 3 00:21:34.879: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7759" for this suite. 
• [SLOW TEST:6.193 seconds] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:45 should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":309,"completed":262,"skipped":4624,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [k8s.io] [sig-node] Pods Extended /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 3 00:21:34.889: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods Set QOS Class /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:150 [It] should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying QOS class is set on the pod [AfterEach] [k8s.io] [sig-node] Pods Extended /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 3 00:21:34.974: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-4881" for this suite. •{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":309,"completed":263,"skipped":4669,"failed":0} SSSS ------------------------------ [sig-node] PodTemplates should run the lifecycle of PodTemplates [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-node] PodTemplates /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 3 00:21:34.995: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename podtemplate STEP: Waiting for a default service account to be provisioned in namespace [It] should run the lifecycle of PodTemplates [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [AfterEach] [sig-node] PodTemplates /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 3 00:21:35.281: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "podtemplate-741" for this suite. 
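The PodTemplates lifecycle case above runs a PodTemplate through create, read, and delete. A minimal sketch of the create/delete half, with placeholder names and image (not the test's own code):

```go
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ns := "default"

	pt := &corev1.PodTemplate{
		ObjectMeta: metav1.ObjectMeta{Name: "example-pod-template"},
		Template: corev1.PodTemplateSpec{
			Spec: corev1.PodSpec{
				Containers: []corev1.Container{{Name: "nginx", Image: "nginx"}},
			},
		},
	}
	created, err := cs.CoreV1().PodTemplates(ns).Create(context.TODO(), pt, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("created", created.Name)

	// Delete it again to complete the lifecycle.
	if err := cs.CoreV1().PodTemplates(ns).Delete(context.TODO(), created.Name, metav1.DeleteOptions{}); err != nil {
		panic(err)
	}
}
```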
•{"msg":"PASSED [sig-node] PodTemplates should run the lifecycle of PodTemplates [Conformance]","total":309,"completed":264,"skipped":4673,"failed":0} SSSS ------------------------------ [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 3 00:21:35.290: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] pod should support shared volumes between containers [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating Pod STEP: Reading file content from the nginx-container Feb 3 00:21:41.422: INFO: ExecWithOptions {Command:[/bin/sh -c cat /usr/share/volumeshare/shareddata.txt] Namespace:emptydir-7819 PodName:pod-sharedvolume-83d4c51b-96ac-454f-87ed-3aa5dedc7667 ContainerName:busybox-main-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Feb 3 00:21:41.422: INFO: >>> kubeConfig: /root/.kube/config I0203 00:21:41.457968 7 log.go:181] (0xc000e286e0) (0xc0051dd9a0) Create stream I0203 00:21:41.458005 7 log.go:181] (0xc000e286e0) (0xc0051dd9a0) Stream added, broadcasting: 1 I0203 00:21:41.460370 7 log.go:181] (0xc000e286e0) Reply frame received for 1 I0203 00:21:41.460434 7 log.go:181] (0xc000e286e0) (0xc002ad2000) Create stream I0203 00:21:41.460463 7 log.go:181] (0xc000e286e0) (0xc002ad2000) Stream added, broadcasting: 3 I0203 00:21:41.461570 7 log.go:181] (0xc000e286e0) Reply frame received for 3 I0203 00:21:41.461629 7 log.go:181] (0xc000e286e0) (0xc002ad20a0) Create stream I0203 00:21:41.461647 7 log.go:181] (0xc000e286e0) (0xc002ad20a0) Stream added, broadcasting: 5 I0203 00:21:41.462741 7 log.go:181] (0xc000e286e0) Reply frame received for 5 I0203 00:21:41.556686 7 log.go:181] (0xc000e286e0) Data frame received for 5 I0203 00:21:41.556760 7 log.go:181] (0xc002ad20a0) (5) Data frame handling I0203 00:21:41.556799 7 log.go:181] (0xc000e286e0) Data frame received for 3 I0203 00:21:41.556820 7 log.go:181] (0xc002ad2000) (3) Data frame handling I0203 00:21:41.556939 7 log.go:181] (0xc002ad2000) (3) Data frame sent I0203 00:21:41.556967 7 log.go:181] (0xc000e286e0) Data frame received for 3 I0203 00:21:41.556989 7 log.go:181] (0xc002ad2000) (3) Data frame handling I0203 00:21:41.557937 7 log.go:181] (0xc000e286e0) Data frame received for 1 I0203 00:21:41.557958 7 log.go:181] (0xc0051dd9a0) (1) Data frame handling I0203 00:21:41.557976 7 log.go:181] (0xc0051dd9a0) (1) Data frame sent I0203 00:21:41.557994 7 log.go:181] (0xc000e286e0) (0xc0051dd9a0) Stream removed, broadcasting: 1 I0203 00:21:41.558070 7 log.go:181] (0xc000e286e0) (0xc0051dd9a0) Stream removed, broadcasting: 1 I0203 00:21:41.558081 7 log.go:181] (0xc000e286e0) (0xc002ad2000) Stream removed, broadcasting: 3 I0203 00:21:41.558089 7 log.go:181] (0xc000e286e0) (0xc002ad20a0) Stream removed, broadcasting: 5 Feb 3 00:21:41.558: INFO: Exec stderr: "" I0203 00:21:41.558108 7 log.go:181] (0xc000e286e0) Go away received [AfterEach] [sig-storage] EmptyDir volumes 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 3 00:21:41.558: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7819" for this suite. • [SLOW TEST:6.278 seconds] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:45 pod should support shared volumes between containers [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":309,"completed":265,"skipped":4677,"failed":0} SSSS ------------------------------ [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-scheduling] LimitRange /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 3 00:21:41.568: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename limitrange STEP: Waiting for a default service account to be provisioned in namespace [It] should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a LimitRange STEP: Setting up watch STEP: Submitting a LimitRange Feb 3 00:21:41.681: INFO: observed the limitRanges list STEP: Verifying LimitRange creation was observed STEP: Fetching the LimitRange to ensure it has proper values Feb 3 00:21:41.686: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] Feb 3 00:21:41.686: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] STEP: Creating a Pod with no resource requirements STEP: Ensuring Pod has resource requirements applied from LimitRange Feb 3 00:21:41.693: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] Feb 3 00:21:41.693: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] STEP: Creating a Pod with partial resource requirements STEP: Ensuring Pod has merged resource requirements applied from LimitRange Feb 3 00:21:41.753: INFO: Verifying requests: expected map[cpu:{{300 -3} {} 300m DecimalSI} 
ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}] Feb 3 00:21:41.753: INFO: Verifying limits: expected map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] STEP: Failing to create a Pod with less than min resources STEP: Failing to create a Pod with more than max resources STEP: Updating a LimitRange STEP: Verifying LimitRange updating is effective STEP: Creating a Pod with less than former min resources STEP: Failing to create a Pod with more than max resources STEP: Deleting a LimitRange STEP: Verifying the LimitRange was deleted Feb 3 00:21:49.243: INFO: limitRange is already deleted STEP: Creating a Pod with more than former max resources [AfterEach] [sig-scheduling] LimitRange /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 3 00:21:49.258: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "limitrange-5477" for this suite. • [SLOW TEST:7.754 seconds] [sig-scheduling] LimitRange /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]","total":309,"completed":266,"skipped":4681,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 3 00:21:49.322: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a pod to test emptydir 0644 on node default medium Feb 3 00:21:49.503: INFO: Waiting up to 5m0s for pod "pod-5789c352-e83f-4957-83e1-01b125a51aa6" in namespace "emptydir-8586" to be "Succeeded or Failed" Feb 3 00:21:49.515: INFO: Pod "pod-5789c352-e83f-4957-83e1-01b125a51aa6": Phase="Pending", Reason="", readiness=false. Elapsed: 11.740059ms Feb 3 00:21:51.604: INFO: Pod "pod-5789c352-e83f-4957-83e1-01b125a51aa6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.100696734s Feb 3 00:21:53.608: INFO: Pod "pod-5789c352-e83f-4957-83e1-01b125a51aa6": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.104914695s Feb 3 00:21:55.627: INFO: Pod "pod-5789c352-e83f-4957-83e1-01b125a51aa6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.12353008s STEP: Saw pod success Feb 3 00:21:55.627: INFO: Pod "pod-5789c352-e83f-4957-83e1-01b125a51aa6" satisfied condition "Succeeded or Failed" Feb 3 00:21:55.633: INFO: Trying to get logs from node leguer-worker pod pod-5789c352-e83f-4957-83e1-01b125a51aa6 container test-container: STEP: delete the pod Feb 3 00:21:56.118: INFO: Waiting for pod pod-5789c352-e83f-4957-83e1-01b125a51aa6 to disappear Feb 3 00:21:56.153: INFO: Pod pod-5789c352-e83f-4957-83e1-01b125a51aa6 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 3 00:21:56.154: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8586" for this suite. • [SLOW TEST:6.841 seconds] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:45 should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":309,"completed":267,"skipped":4707,"failed":0} SSS ------------------------------ [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 3 00:21:56.164: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a pod to test downward api env vars Feb 3 00:21:56.600: INFO: Waiting up to 5m0s for pod "downward-api-39f2cfe9-d0dc-4ccf-9dcf-340958dbea26" in namespace "downward-api-794" to be "Succeeded or Failed" Feb 3 00:21:56.657: INFO: Pod "downward-api-39f2cfe9-d0dc-4ccf-9dcf-340958dbea26": Phase="Pending", Reason="", readiness=false. Elapsed: 56.886862ms Feb 3 00:21:58.663: INFO: Pod "downward-api-39f2cfe9-d0dc-4ccf-9dcf-340958dbea26": Phase="Pending", Reason="", readiness=false. Elapsed: 2.062322243s Feb 3 00:22:00.667: INFO: Pod "downward-api-39f2cfe9-d0dc-4ccf-9dcf-340958dbea26": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.066704349s STEP: Saw pod success Feb 3 00:22:00.667: INFO: Pod "downward-api-39f2cfe9-d0dc-4ccf-9dcf-340958dbea26" satisfied condition "Succeeded or Failed" Feb 3 00:22:00.670: INFO: Trying to get logs from node leguer-worker pod downward-api-39f2cfe9-d0dc-4ccf-9dcf-340958dbea26 container dapi-container: STEP: delete the pod Feb 3 00:22:00.722: INFO: Waiting for pod downward-api-39f2cfe9-d0dc-4ccf-9dcf-340958dbea26 to disappear Feb 3 00:22:00.735: INFO: Pod downward-api-39f2cfe9-d0dc-4ccf-9dcf-340958dbea26 no longer exists [AfterEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 3 00:22:00.735: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-794" for this suite. •{"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":309,"completed":268,"skipped":4710,"failed":0} SSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [k8s.io] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 3 00:22:00.743: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's command [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a pod to test substitution in container's command Feb 3 00:22:00.861: INFO: Waiting up to 5m0s for pod "var-expansion-91938fe3-d1f0-42af-b268-19dae50a9893" in namespace "var-expansion-9221" to be "Succeeded or Failed" Feb 3 00:22:00.871: INFO: Pod "var-expansion-91938fe3-d1f0-42af-b268-19dae50a9893": Phase="Pending", Reason="", readiness=false. Elapsed: 9.786204ms Feb 3 00:22:02.875: INFO: Pod "var-expansion-91938fe3-d1f0-42af-b268-19dae50a9893": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013959194s Feb 3 00:22:04.880: INFO: Pod "var-expansion-91938fe3-d1f0-42af-b268-19dae50a9893": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018705899s STEP: Saw pod success Feb 3 00:22:04.880: INFO: Pod "var-expansion-91938fe3-d1f0-42af-b268-19dae50a9893" satisfied condition "Succeeded or Failed" Feb 3 00:22:04.883: INFO: Trying to get logs from node leguer-worker pod var-expansion-91938fe3-d1f0-42af-b268-19dae50a9893 container dapi-container: STEP: delete the pod Feb 3 00:22:04.935: INFO: Waiting for pod var-expansion-91938fe3-d1f0-42af-b268-19dae50a9893 to disappear Feb 3 00:22:04.941: INFO: Pod var-expansion-91938fe3-d1f0-42af-b268-19dae50a9893 no longer exists [AfterEach] [k8s.io] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 3 00:22:04.941: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-9221" for this suite. 
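For reference, the Variable Expansion test just finished above exercises Kubernetes' $(VAR) substitution in a container's command, where the kubelet replaces $(NAME) references with values from the container's env before the process starts. A minimal sketch of that kind of pod spec, assuming a busybox image and a hypothetical MESSAGE variable (not the test's actual fixture):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "var-expansion-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "dapi-container",
				Image: "busybox:1.29",
				// Kubernetes expands $(MESSAGE) from the env list below
				// before the command is handed to the container runtime.
				Command: []string{"sh", "-c", "echo $(MESSAGE)"},
				Env: []corev1.EnvVar{{
					Name:  "MESSAGE",
					Value: "substituted by the kubelet",
				}},
			}},
		},
	}
	fmt.Println(pod.Spec.Containers[0].Command)
}
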
•{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":309,"completed":269,"skipped":4720,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [k8s.io] [sig-node] PreStop /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 3 00:22:04.950: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] [sig-node] PreStop /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:157 [It] should call prestop when killing a pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating server pod server in namespace prestop-6898 STEP: Waiting for pods to come up. STEP: Creating tester pod tester in namespace prestop-6898 STEP: Deleting pre-stop pod Feb 3 00:22:18.126: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": { "prestop": 1 }, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." ], "StillContactingPeers": true } STEP: Deleting the server pod [AfterEach] [k8s.io] [sig-node] PreStop /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 3 00:22:18.197: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "prestop-6898" for this suite. • [SLOW TEST:13.326 seconds] [k8s.io] [sig-node] PreStop /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 should call prestop when killing a pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance]","total":309,"completed":270,"skipped":4744,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 3 00:22:18.277: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replication controller. 
[Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicationController STEP: Ensuring resource quota status captures replication controller creation STEP: Deleting a ReplicationController STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 3 00:22:29.810: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-5294" for this suite. • [SLOW TEST:11.541 seconds] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replication controller. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance]","total":309,"completed":271,"skipped":4793,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [k8s.io] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 3 00:22:29.819: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Feb 3 00:22:34.328: INFO: Expected: &{} to match Container's Termination Message: -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 3 00:22:34.502: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-6916" for this suite. 
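The Container Runtime test above checks that, with TerminationMessagePolicy set to FallbackToLogsOnError, a pod that succeeds without writing /dev/termination-log ends up with an empty termination message (the log fallback only applies when the container fails). A minimal sketch, assuming a busybox image, of a container spec using that policy:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	c := corev1.Container{
		Name:    "termination-message-container",
		Image:   "busybox:1.29",
		Command: []string{"sh", "-c", "exit 0"},
		// Default path the kubelet reads the termination message from.
		TerminationMessagePath: "/dev/termination-log",
		// On failure, fall back to the tail of the container log when the
		// file above is empty; on success the message simply stays empty.
		TerminationMessagePolicy: corev1.TerminationMessageFallbackToLogsOnError,
	}
	fmt.Println(c.TerminationMessagePolicy)
}
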
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":309,"completed":272,"skipped":4819,"failed":0} SS ------------------------------ [k8s.io] Lease lease API should be available [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [k8s.io] Lease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 3 00:22:34.511: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename lease-test STEP: Waiting for a default service account to be provisioned in namespace [It] lease API should be available [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [AfterEach] [k8s.io] Lease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 3 00:22:34.666: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "lease-test-426" for this suite. •{"msg":"PASSED [k8s.io] Lease lease API should be available [Conformance]","total":309,"completed":273,"skipped":4821,"failed":0} SSS ------------------------------ [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 3 00:22:34.675: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:129 [It] should retry creating failed daemon pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. 
Feb 3 00:22:34.900: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 3 00:22:34.903: INFO: Number of nodes with available pods: 0 Feb 3 00:22:34.903: INFO: Node leguer-worker is running more than one daemon pod Feb 3 00:22:35.908: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 3 00:22:35.911: INFO: Number of nodes with available pods: 0 Feb 3 00:22:35.911: INFO: Node leguer-worker is running more than one daemon pod Feb 3 00:22:36.908: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 3 00:22:36.912: INFO: Number of nodes with available pods: 0 Feb 3 00:22:36.912: INFO: Node leguer-worker is running more than one daemon pod Feb 3 00:22:37.949: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 3 00:22:37.963: INFO: Number of nodes with available pods: 0 Feb 3 00:22:37.963: INFO: Node leguer-worker is running more than one daemon pod Feb 3 00:22:38.909: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 3 00:22:38.913: INFO: Number of nodes with available pods: 1 Feb 3 00:22:38.913: INFO: Node leguer-worker2 is running more than one daemon pod Feb 3 00:22:39.907: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 3 00:22:39.910: INFO: Number of nodes with available pods: 2 Feb 3 00:22:39.910: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. 
Feb 3 00:22:39.983: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 3 00:22:40.020: INFO: Number of nodes with available pods: 1 Feb 3 00:22:40.020: INFO: Node leguer-worker is running more than one daemon pod Feb 3 00:22:41.026: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 3 00:22:41.029: INFO: Number of nodes with available pods: 1 Feb 3 00:22:41.029: INFO: Node leguer-worker is running more than one daemon pod Feb 3 00:22:42.174: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 3 00:22:42.176: INFO: Number of nodes with available pods: 1 Feb 3 00:22:42.176: INFO: Node leguer-worker is running more than one daemon pod Feb 3 00:22:43.026: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 3 00:22:43.030: INFO: Number of nodes with available pods: 1 Feb 3 00:22:43.030: INFO: Node leguer-worker is running more than one daemon pod Feb 3 00:22:44.032: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 3 00:22:44.044: INFO: Number of nodes with available pods: 2 Feb 3 00:22:44.044: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Wait for the failed daemon pod to be completely deleted. [AfterEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:95 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-3098, will wait for the garbage collector to delete the pods Feb 3 00:22:44.109: INFO: Deleting DaemonSet.extensions daemon-set took: 6.52003ms Feb 3 00:22:44.809: INFO: Terminating DaemonSet.extensions daemon-set pods took: 700.264898ms Feb 3 00:23:30.212: INFO: Number of nodes with available pods: 0 Feb 3 00:23:30.212: INFO: Number of running nodes: 0, number of available pods: 0 Feb 3 00:23:30.214: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"4194612"},"items":null} Feb 3 00:23:30.217: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"4194612"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 3 00:23:30.226: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-3098" for this suite. 
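Throughout the DaemonSet test above, the log repeatedly notes that the daemon pods "can't tolerate node leguer-control-plane" and skips that node, because the pod template carries no toleration for the node-role.kubernetes.io/master:NoSchedule taint. A minimal sketch, with a hypothetical httpd image, of the toleration a DaemonSet would need to also land on such a node:

package main

import (
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	labels := map[string]string{"app": "daemon-demo"}
	ds := appsv1.DaemonSet{
		ObjectMeta: metav1.ObjectMeta{Name: "daemon-demo"},
		Spec: appsv1.DaemonSetSpec{
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					// Without this toleration the pods are not placed on
					// tainted control-plane nodes, as the log above shows.
					Tolerations: []corev1.Toleration{{
						Key:      "node-role.kubernetes.io/master",
						Operator: corev1.TolerationOpExists,
						Effect:   corev1.TaintEffectNoSchedule,
					}},
					Containers: []corev1.Container{{
						Name:  "app",
						Image: "httpd:2.4",
					}},
				},
			},
		},
	}
	fmt.Println(ds.Spec.Template.Spec.Tolerations[0].Key)
}
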
• [SLOW TEST:55.560 seconds] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should retry creating failed daemon pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","total":309,"completed":274,"skipped":4824,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 3 00:23:30.236: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields at the schema root [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 Feb 3 00:23:30.312: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Feb 3 00:23:33.881: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5994 --namespace=crd-publish-openapi-5994 create -f -' Feb 3 00:23:37.641: INFO: stderr: "" Feb 3 00:23:37.641: INFO: stdout: "e2e-test-crd-publish-openapi-1440-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" Feb 3 00:23:37.641: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5994 --namespace=crd-publish-openapi-5994 delete e2e-test-crd-publish-openapi-1440-crds test-cr' Feb 3 00:23:37.814: INFO: stderr: "" Feb 3 00:23:37.814: INFO: stdout: "e2e-test-crd-publish-openapi-1440-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" Feb 3 00:23:37.814: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5994 --namespace=crd-publish-openapi-5994 apply -f -' Feb 3 00:23:38.150: INFO: stderr: "" Feb 3 00:23:38.150: INFO: stdout: "e2e-test-crd-publish-openapi-1440-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" Feb 3 00:23:38.150: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5994 --namespace=crd-publish-openapi-5994 delete e2e-test-crd-publish-openapi-1440-crds test-cr' Feb 3 00:23:38.270: INFO: stderr: "" Feb 3 00:23:38.270: INFO: stdout: "e2e-test-crd-publish-openapi-1440-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR Feb 3 00:23:38.270: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5994 explain e2e-test-crd-publish-openapi-1440-crds' Feb 3 
00:23:38.571: INFO: stderr: "" Feb 3 00:23:38.571: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-1440-crd\nVERSION: crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 3 00:23:42.124: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-5994" for this suite. • [SLOW TEST:11.897 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields at the schema root [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":309,"completed":275,"skipped":4837,"failed":0} S ------------------------------ [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 3 00:23:42.133: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:53 [It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating pod liveness-29262f7b-7500-41e7-88b6-77faa551db8c in namespace container-probe-90 Feb 3 00:23:46.281: INFO: Started pod liveness-29262f7b-7500-41e7-88b6-77faa551db8c in namespace container-probe-90 STEP: checking the pod's current state and verifying that restartCount is present Feb 3 00:23:46.285: INFO: Initial restart count of pod liveness-29262f7b-7500-41e7-88b6-77faa551db8c is 0 Feb 3 00:24:12.352: INFO: Restart count of pod container-probe-90/liveness-29262f7b-7500-41e7-88b6-77faa551db8c is now 1 (26.066849748s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 3 00:24:12.382: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-90" for this suite. 
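The probe test above starts a pod whose /healthz endpoint eventually begins failing, then waits for restartCount to reach 1 (about 26s elapsed in this run). A minimal sketch of an HTTP liveness probe configured along the same lines, assuming a hypothetical server image listening on port 8080:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	probe := &corev1.Probe{
		InitialDelaySeconds: 5, // give the server time to start
		PeriodSeconds:       3, // probe every 3s
		FailureThreshold:    3, // restart after 3 consecutive failures
	}
	// Set via the embedded handler field so this compiles against both older
	// (Handler) and newer (ProbeHandler) versions of k8s.io/api.
	probe.HTTPGet = &corev1.HTTPGetAction{
		Path: "/healthz",
		Port: intstr.FromInt(8080),
	}

	container := corev1.Container{
		Name:          "liveness",
		Image:         "example.com/healthz-server:latest", // hypothetical image
		LivenessProbe: probe,
	}
	fmt.Println(container.LivenessProbe.HTTPGet.Path)
}
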
• [SLOW TEST:30.272 seconds] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":309,"completed":276,"skipped":4838,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 3 00:24:12.406: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not cause race condition when used for configmaps [Serial] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating 50 configmaps STEP: Creating RC which spawns configmap-volume pods Feb 3 00:24:13.905: INFO: Pod name wrapped-volume-race-6a5ccdb8-b577-484a-9fcb-9ea3ab731219: Found 0 pods out of 5 Feb 3 00:24:18.919: INFO: Pod name wrapped-volume-race-6a5ccdb8-b577-484a-9fcb-9ea3ab731219: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-6a5ccdb8-b577-484a-9fcb-9ea3ab731219 in namespace emptydir-wrapper-538, will wait for the garbage collector to delete the pods Feb 3 00:24:35.048: INFO: Deleting ReplicationController wrapped-volume-race-6a5ccdb8-b577-484a-9fcb-9ea3ab731219 took: 48.003781ms Feb 3 00:24:35.648: INFO: Terminating ReplicationController wrapped-volume-race-6a5ccdb8-b577-484a-9fcb-9ea3ab731219 pods took: 600.328147ms STEP: Creating RC which spawns configmap-volume pods Feb 3 00:25:20.189: INFO: Pod name wrapped-volume-race-0a558d0b-485a-42a8-8dc5-8aeaeea57a5e: Found 0 pods out of 5 Feb 3 00:25:25.197: INFO: Pod name wrapped-volume-race-0a558d0b-485a-42a8-8dc5-8aeaeea57a5e: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-0a558d0b-485a-42a8-8dc5-8aeaeea57a5e in namespace emptydir-wrapper-538, will wait for the garbage collector to delete the pods Feb 3 00:25:41.283: INFO: Deleting ReplicationController wrapped-volume-race-0a558d0b-485a-42a8-8dc5-8aeaeea57a5e took: 7.612454ms Feb 3 00:25:41.883: INFO: Terminating ReplicationController wrapped-volume-race-0a558d0b-485a-42a8-8dc5-8aeaeea57a5e pods took: 600.228386ms STEP: Creating RC which spawns configmap-volume pods Feb 3 00:26:30.428: INFO: Pod name wrapped-volume-race-02450871-e107-4d9a-bfb5-36e9e224bcb3: Found 0 pods out of 5 Feb 3 00:26:35.435: INFO: Pod name wrapped-volume-race-02450871-e107-4d9a-bfb5-36e9e224bcb3: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-02450871-e107-4d9a-bfb5-36e9e224bcb3 in namespace emptydir-wrapper-538, will wait for the garbage collector to delete the pods Feb 3 
00:26:49.520: INFO: Deleting ReplicationController wrapped-volume-race-02450871-e107-4d9a-bfb5-36e9e224bcb3 took: 8.186637ms Feb 3 00:26:50.120: INFO: Terminating ReplicationController wrapped-volume-race-02450871-e107-4d9a-bfb5-36e9e224bcb3 pods took: 600.337618ms STEP: Cleaning up the configMaps [AfterEach] [sig-storage] EmptyDir wrapper volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 3 00:27:41.039: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-538" for this suite. • [SLOW TEST:208.643 seconds] [sig-storage] EmptyDir wrapper volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should not cause race condition when used for configmaps [Serial] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","total":309,"completed":277,"skipped":4851,"failed":0} [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 3 00:27:41.049: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:162 [It] should invoke init containers on a RestartNever pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: creating the pod Feb 3 00:27:41.166: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 3 00:27:49.165: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-7093" for this suite. 
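The InitContainer test above verifies that init containers run to completion, one at a time and in order, before the app container of a RestartNever pod starts. A minimal sketch of a pod with that shape, assuming busybox and hypothetical container names:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "init-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			// Init containers run sequentially; each must exit 0 before the
			// next one (and finally the app container) is started.
			InitContainers: []corev1.Container{
				{Name: "init1", Image: "busybox:1.29", Command: []string{"true"}},
				{Name: "init2", Image: "busybox:1.29", Command: []string{"true"}},
			},
			Containers: []corev1.Container{
				{Name: "run1", Image: "busybox:1.29", Command: []string{"sh", "-c", "echo done"}},
			},
		},
	}
	fmt.Println(len(pod.Spec.InitContainers), "init containers before", pod.Spec.Containers[0].Name)
}
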
• [SLOW TEST:8.159 seconds] [k8s.io] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 should invoke init containers on a RestartNever pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":309,"completed":278,"skipped":4851,"failed":0} SSSSSSSS ------------------------------ [sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 3 00:27:49.208: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:745 [It] should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: creating service in namespace services-7267 Feb 3 00:27:53.591: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=services-7267 exec kube-proxy-mode-detector -- /bin/sh -x -c curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode' Feb 3 00:27:53.821: INFO: stderr: "I0203 00:27:53.728179 3911 log.go:181] (0xc000c86000) (0xc0006da000) Create stream\nI0203 00:27:53.728266 3911 log.go:181] (0xc000c86000) (0xc0006da000) Stream added, broadcasting: 1\nI0203 00:27:53.730304 3911 log.go:181] (0xc000c86000) Reply frame received for 1\nI0203 00:27:53.730391 3911 log.go:181] (0xc000c86000) (0xc0006ac780) Create stream\nI0203 00:27:53.730425 3911 log.go:181] (0xc000c86000) (0xc0006ac780) Stream added, broadcasting: 3\nI0203 00:27:53.731290 3911 log.go:181] (0xc000c86000) Reply frame received for 3\nI0203 00:27:53.731318 3911 log.go:181] (0xc000c86000) (0xc0006da0a0) Create stream\nI0203 00:27:53.731330 3911 log.go:181] (0xc000c86000) (0xc0006da0a0) Stream added, broadcasting: 5\nI0203 00:27:53.732158 3911 log.go:181] (0xc000c86000) Reply frame received for 5\nI0203 00:27:53.808352 3911 log.go:181] (0xc000c86000) Data frame received for 5\nI0203 00:27:53.808373 3911 log.go:181] (0xc0006da0a0) (5) Data frame handling\nI0203 00:27:53.808385 3911 log.go:181] (0xc0006da0a0) (5) Data frame sent\n+ curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode\nI0203 00:27:53.811245 3911 log.go:181] (0xc000c86000) Data frame received for 3\nI0203 00:27:53.811272 3911 log.go:181] (0xc0006ac780) (3) Data frame handling\nI0203 00:27:53.811295 3911 log.go:181] (0xc0006ac780) (3) Data frame sent\nI0203 00:27:53.811608 3911 log.go:181] (0xc000c86000) Data frame received for 5\nI0203 00:27:53.811628 3911 log.go:181] (0xc0006da0a0) (5) Data frame handling\nI0203 00:27:53.811699 3911 log.go:181] (0xc000c86000) Data frame received for 3\nI0203 00:27:53.811719 3911 log.go:181] (0xc0006ac780) (3) Data 
frame handling\nI0203 00:27:53.816084 3911 log.go:181] (0xc000c86000) Data frame received for 1\nI0203 00:27:53.816110 3911 log.go:181] (0xc0006da000) (1) Data frame handling\nI0203 00:27:53.816124 3911 log.go:181] (0xc0006da000) (1) Data frame sent\nI0203 00:27:53.816141 3911 log.go:181] (0xc000c86000) (0xc0006da000) Stream removed, broadcasting: 1\nI0203 00:27:53.816166 3911 log.go:181] (0xc000c86000) Go away received\nI0203 00:27:53.816733 3911 log.go:181] (0xc000c86000) (0xc0006da000) Stream removed, broadcasting: 1\nI0203 00:27:53.816764 3911 log.go:181] (0xc000c86000) (0xc0006ac780) Stream removed, broadcasting: 3\nI0203 00:27:53.816785 3911 log.go:181] (0xc000c86000) (0xc0006da0a0) Stream removed, broadcasting: 5\n" Feb 3 00:27:53.821: INFO: stdout: "iptables" Feb 3 00:27:53.821: INFO: proxyMode: iptables Feb 3 00:27:53.853: INFO: Waiting for pod kube-proxy-mode-detector to disappear Feb 3 00:27:53.865: INFO: Pod kube-proxy-mode-detector no longer exists STEP: creating service affinity-nodeport-timeout in namespace services-7267 STEP: creating replication controller affinity-nodeport-timeout in namespace services-7267 I0203 00:27:53.944754 7 runners.go:190] Created replication controller with name: affinity-nodeport-timeout, namespace: services-7267, replica count: 3 I0203 00:27:56.995354 7 runners.go:190] affinity-nodeport-timeout Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0203 00:27:59.995618 7 runners.go:190] affinity-nodeport-timeout Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Feb 3 00:28:00.007: INFO: Creating new exec pod Feb 3 00:28:05.043: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=services-7267 exec execpod-affinitybz6lb -- /bin/sh -x -c nc -zv -t -w 2 affinity-nodeport-timeout 80' Feb 3 00:28:05.272: INFO: stderr: "I0203 00:28:05.172765 3929 log.go:181] (0xc00003a420) (0xc0003b6000) Create stream\nI0203 00:28:05.172907 3929 log.go:181] (0xc00003a420) (0xc0003b6000) Stream added, broadcasting: 1\nI0203 00:28:05.174739 3929 log.go:181] (0xc00003a420) Reply frame received for 1\nI0203 00:28:05.174774 3929 log.go:181] (0xc00003a420) (0xc0000cd680) Create stream\nI0203 00:28:05.174790 3929 log.go:181] (0xc00003a420) (0xc0000cd680) Stream added, broadcasting: 3\nI0203 00:28:05.175657 3929 log.go:181] (0xc00003a420) Reply frame received for 3\nI0203 00:28:05.175694 3929 log.go:181] (0xc00003a420) (0xc00048b540) Create stream\nI0203 00:28:05.175704 3929 log.go:181] (0xc00003a420) (0xc00048b540) Stream added, broadcasting: 5\nI0203 00:28:05.176537 3929 log.go:181] (0xc00003a420) Reply frame received for 5\nI0203 00:28:05.263690 3929 log.go:181] (0xc00003a420) Data frame received for 5\nI0203 00:28:05.263735 3929 log.go:181] (0xc00048b540) (5) Data frame handling\nI0203 00:28:05.263767 3929 log.go:181] (0xc00048b540) (5) Data frame sent\n+ nc -zv -t -w 2 affinity-nodeport-timeout 80\nI0203 00:28:05.264579 3929 log.go:181] (0xc00003a420) Data frame received for 3\nI0203 00:28:05.264607 3929 log.go:181] (0xc0000cd680) (3) Data frame handling\nI0203 00:28:05.264631 3929 log.go:181] (0xc00003a420) Data frame received for 5\nI0203 00:28:05.264642 3929 log.go:181] (0xc00048b540) (5) Data frame handling\nI0203 00:28:05.264654 3929 log.go:181] (0xc00048b540) (5) Data frame sent\nI0203 00:28:05.264663 3929 log.go:181] (0xc00003a420) Data frame received for 5\nI0203 
00:28:05.264669 3929 log.go:181] (0xc00048b540) (5) Data frame handling\nConnection to affinity-nodeport-timeout 80 port [tcp/http] succeeded!\nI0203 00:28:05.266271 3929 log.go:181] (0xc00003a420) Data frame received for 1\nI0203 00:28:05.266292 3929 log.go:181] (0xc0003b6000) (1) Data frame handling\nI0203 00:28:05.266302 3929 log.go:181] (0xc0003b6000) (1) Data frame sent\nI0203 00:28:05.266315 3929 log.go:181] (0xc00003a420) (0xc0003b6000) Stream removed, broadcasting: 1\nI0203 00:28:05.266424 3929 log.go:181] (0xc00003a420) Go away received\nI0203 00:28:05.266674 3929 log.go:181] (0xc00003a420) (0xc0003b6000) Stream removed, broadcasting: 1\nI0203 00:28:05.266691 3929 log.go:181] (0xc00003a420) (0xc0000cd680) Stream removed, broadcasting: 3\nI0203 00:28:05.266698 3929 log.go:181] (0xc00003a420) (0xc00048b540) Stream removed, broadcasting: 5\n" Feb 3 00:28:05.272: INFO: stdout: "" Feb 3 00:28:05.273: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=services-7267 exec execpod-affinitybz6lb -- /bin/sh -x -c nc -zv -t -w 2 10.96.17.44 80' Feb 3 00:28:05.507: INFO: stderr: "I0203 00:28:05.411531 3947 log.go:181] (0xc000560000) (0xc000c561e0) Create stream\nI0203 00:28:05.411605 3947 log.go:181] (0xc000560000) (0xc000c561e0) Stream added, broadcasting: 1\nI0203 00:28:05.414200 3947 log.go:181] (0xc000560000) Reply frame received for 1\nI0203 00:28:05.414254 3947 log.go:181] (0xc000560000) (0xc000315e00) Create stream\nI0203 00:28:05.414269 3947 log.go:181] (0xc000560000) (0xc000315e00) Stream added, broadcasting: 3\nI0203 00:28:05.415031 3947 log.go:181] (0xc000560000) Reply frame received for 3\nI0203 00:28:05.415062 3947 log.go:181] (0xc000560000) (0xc000c56280) Create stream\nI0203 00:28:05.415071 3947 log.go:181] (0xc000560000) (0xc000c56280) Stream added, broadcasting: 5\nI0203 00:28:05.415867 3947 log.go:181] (0xc000560000) Reply frame received for 5\nI0203 00:28:05.498962 3947 log.go:181] (0xc000560000) Data frame received for 3\nI0203 00:28:05.499010 3947 log.go:181] (0xc000315e00) (3) Data frame handling\nI0203 00:28:05.499031 3947 log.go:181] (0xc000560000) Data frame received for 5\nI0203 00:28:05.499056 3947 log.go:181] (0xc000c56280) (5) Data frame handling\nI0203 00:28:05.499078 3947 log.go:181] (0xc000c56280) (5) Data frame sent\nI0203 00:28:05.499110 3947 log.go:181] (0xc000560000) Data frame received for 5\nI0203 00:28:05.499136 3947 log.go:181] (0xc000c56280) (5) Data frame handling\n+ nc -zv -t -w 2 10.96.17.44 80\nConnection to 10.96.17.44 80 port [tcp/http] succeeded!\nI0203 00:28:05.500769 3947 log.go:181] (0xc000560000) Data frame received for 1\nI0203 00:28:05.500796 3947 log.go:181] (0xc000c561e0) (1) Data frame handling\nI0203 00:28:05.500813 3947 log.go:181] (0xc000c561e0) (1) Data frame sent\nI0203 00:28:05.500910 3947 log.go:181] (0xc000560000) (0xc000c561e0) Stream removed, broadcasting: 1\nI0203 00:28:05.501310 3947 log.go:181] (0xc000560000) (0xc000c561e0) Stream removed, broadcasting: 1\nI0203 00:28:05.501328 3947 log.go:181] (0xc000560000) (0xc000315e00) Stream removed, broadcasting: 3\nI0203 00:28:05.501338 3947 log.go:181] (0xc000560000) (0xc000c56280) Stream removed, broadcasting: 5\n" Feb 3 00:28:05.507: INFO: stdout: "" Feb 3 00:28:05.507: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=services-7267 exec execpod-affinitybz6lb -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.13 31097' Feb 3 00:28:05.713: INFO: stderr: 
"I0203 00:28:05.636546 3965 log.go:181] (0xc00087a210) (0xc0008a1680) Create stream\nI0203 00:28:05.636612 3965 log.go:181] (0xc00087a210) (0xc0008a1680) Stream added, broadcasting: 1\nI0203 00:28:05.641527 3965 log.go:181] (0xc00087a210) Reply frame received for 1\nI0203 00:28:05.641579 3965 log.go:181] (0xc00087a210) (0xc0005503c0) Create stream\nI0203 00:28:05.641597 3965 log.go:181] (0xc00087a210) (0xc0005503c0) Stream added, broadcasting: 3\nI0203 00:28:05.643911 3965 log.go:181] (0xc00087a210) Reply frame received for 3\nI0203 00:28:05.643937 3965 log.go:181] (0xc00087a210) (0xc0008a1900) Create stream\nI0203 00:28:05.643945 3965 log.go:181] (0xc00087a210) (0xc0008a1900) Stream added, broadcasting: 5\nI0203 00:28:05.644693 3965 log.go:181] (0xc00087a210) Reply frame received for 5\nI0203 00:28:05.705098 3965 log.go:181] (0xc00087a210) Data frame received for 5\nI0203 00:28:05.705132 3965 log.go:181] (0xc0008a1900) (5) Data frame handling\nI0203 00:28:05.705146 3965 log.go:181] (0xc0008a1900) (5) Data frame sent\nI0203 00:28:05.705156 3965 log.go:181] (0xc00087a210) Data frame received for 5\nI0203 00:28:05.705164 3965 log.go:181] (0xc0008a1900) (5) Data frame handling\n+ nc -zv -t -w 2 172.18.0.13 31097\nConnection to 172.18.0.13 31097 port [tcp/31097] succeeded!\nI0203 00:28:05.705188 3965 log.go:181] (0xc0008a1900) (5) Data frame sent\nI0203 00:28:05.705197 3965 log.go:181] (0xc00087a210) Data frame received for 5\nI0203 00:28:05.705204 3965 log.go:181] (0xc0008a1900) (5) Data frame handling\nI0203 00:28:05.705466 3965 log.go:181] (0xc00087a210) Data frame received for 3\nI0203 00:28:05.705498 3965 log.go:181] (0xc0005503c0) (3) Data frame handling\nI0203 00:28:05.706832 3965 log.go:181] (0xc00087a210) Data frame received for 1\nI0203 00:28:05.706864 3965 log.go:181] (0xc0008a1680) (1) Data frame handling\nI0203 00:28:05.706885 3965 log.go:181] (0xc0008a1680) (1) Data frame sent\nI0203 00:28:05.706911 3965 log.go:181] (0xc00087a210) (0xc0008a1680) Stream removed, broadcasting: 1\nI0203 00:28:05.706933 3965 log.go:181] (0xc00087a210) Go away received\nI0203 00:28:05.707473 3965 log.go:181] (0xc00087a210) (0xc0008a1680) Stream removed, broadcasting: 1\nI0203 00:28:05.707495 3965 log.go:181] (0xc00087a210) (0xc0005503c0) Stream removed, broadcasting: 3\nI0203 00:28:05.707510 3965 log.go:181] (0xc00087a210) (0xc0008a1900) Stream removed, broadcasting: 5\n" Feb 3 00:28:05.713: INFO: stdout: "" Feb 3 00:28:05.713: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=services-7267 exec execpod-affinitybz6lb -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.12 31097' Feb 3 00:28:05.907: INFO: stderr: "I0203 00:28:05.836507 3983 log.go:181] (0xc00055d600) (0xc000554960) Create stream\nI0203 00:28:05.836574 3983 log.go:181] (0xc00055d600) (0xc000554960) Stream added, broadcasting: 1\nI0203 00:28:05.838388 3983 log.go:181] (0xc00055d600) Reply frame received for 1\nI0203 00:28:05.838446 3983 log.go:181] (0xc00055d600) (0xc000c0a0a0) Create stream\nI0203 00:28:05.838469 3983 log.go:181] (0xc00055d600) (0xc000c0a0a0) Stream added, broadcasting: 3\nI0203 00:28:05.839250 3983 log.go:181] (0xc00055d600) Reply frame received for 3\nI0203 00:28:05.839284 3983 log.go:181] (0xc00055d600) (0xc000d0c0a0) Create stream\nI0203 00:28:05.839302 3983 log.go:181] (0xc00055d600) (0xc000d0c0a0) Stream added, broadcasting: 5\nI0203 00:28:05.840401 3983 log.go:181] (0xc00055d600) Reply frame received for 5\nI0203 00:28:05.898315 3983 log.go:181] 
(0xc00055d600) Data frame received for 5\nI0203 00:28:05.898350 3983 log.go:181] (0xc000d0c0a0) (5) Data frame handling\nI0203 00:28:05.898365 3983 log.go:181] (0xc000d0c0a0) (5) Data frame sent\n+ nc -zv -t -w 2 172.18.0.12 31097\nConnection to 172.18.0.12 31097 port [tcp/31097] succeeded!\nI0203 00:28:05.898393 3983 log.go:181] (0xc00055d600) Data frame received for 3\nI0203 00:28:05.898433 3983 log.go:181] (0xc000c0a0a0) (3) Data frame handling\nI0203 00:28:05.898476 3983 log.go:181] (0xc00055d600) Data frame received for 5\nI0203 00:28:05.898512 3983 log.go:181] (0xc000d0c0a0) (5) Data frame handling\nI0203 00:28:05.900313 3983 log.go:181] (0xc00055d600) Data frame received for 1\nI0203 00:28:05.900361 3983 log.go:181] (0xc000554960) (1) Data frame handling\nI0203 00:28:05.900383 3983 log.go:181] (0xc000554960) (1) Data frame sent\nI0203 00:28:05.900429 3983 log.go:181] (0xc00055d600) (0xc000554960) Stream removed, broadcasting: 1\nI0203 00:28:05.900466 3983 log.go:181] (0xc00055d600) Go away received\nI0203 00:28:05.901022 3983 log.go:181] (0xc00055d600) (0xc000554960) Stream removed, broadcasting: 1\nI0203 00:28:05.901050 3983 log.go:181] (0xc00055d600) (0xc000c0a0a0) Stream removed, broadcasting: 3\nI0203 00:28:05.901081 3983 log.go:181] (0xc00055d600) (0xc000d0c0a0) Stream removed, broadcasting: 5\n" Feb 3 00:28:05.907: INFO: stdout: "" Feb 3 00:28:05.907: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=services-7267 exec execpod-affinitybz6lb -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.18.0.13:31097/ ; done' Feb 3 00:28:06.223: INFO: stderr: "I0203 00:28:06.039877 4001 log.go:181] (0xc00064d340) (0xc0006d8be0) Create stream\nI0203 00:28:06.039923 4001 log.go:181] (0xc00064d340) (0xc0006d8be0) Stream added, broadcasting: 1\nI0203 00:28:06.045452 4001 log.go:181] (0xc00064d340) Reply frame received for 1\nI0203 00:28:06.045494 4001 log.go:181] (0xc00064d340) (0xc000ad2000) Create stream\nI0203 00:28:06.045506 4001 log.go:181] (0xc00064d340) (0xc000ad2000) Stream added, broadcasting: 3\nI0203 00:28:06.046493 4001 log.go:181] (0xc00064d340) Reply frame received for 3\nI0203 00:28:06.046538 4001 log.go:181] (0xc00064d340) (0xc0006d8000) Create stream\nI0203 00:28:06.046552 4001 log.go:181] (0xc00064d340) (0xc0006d8000) Stream added, broadcasting: 5\nI0203 00:28:06.047495 4001 log.go:181] (0xc00064d340) Reply frame received for 5\nI0203 00:28:06.122149 4001 log.go:181] (0xc00064d340) Data frame received for 3\nI0203 00:28:06.122197 4001 log.go:181] (0xc000ad2000) (3) Data frame handling\nI0203 00:28:06.122213 4001 log.go:181] (0xc000ad2000) (3) Data frame sent\nI0203 00:28:06.122244 4001 log.go:181] (0xc00064d340) Data frame received for 5\nI0203 00:28:06.122255 4001 log.go:181] (0xc0006d8000) (5) Data frame handling\nI0203 00:28:06.122278 4001 log.go:181] (0xc0006d8000) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.13:31097/\nI0203 00:28:06.127953 4001 log.go:181] (0xc00064d340) Data frame received for 3\nI0203 00:28:06.127979 4001 log.go:181] (0xc000ad2000) (3) Data frame handling\nI0203 00:28:06.127993 4001 log.go:181] (0xc000ad2000) (3) Data frame sent\nI0203 00:28:06.128830 4001 log.go:181] (0xc00064d340) Data frame received for 5\nI0203 00:28:06.128974 4001 log.go:181] (0xc0006d8000) (5) Data frame handling\nI0203 00:28:06.128996 4001 log.go:181] (0xc0006d8000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 
http://172.18.0.13:31097/\nI0203 00:28:06.129023 4001 log.go:181] (0xc00064d340) Data frame received for 3\nI0203 00:28:06.129034 4001 log.go:181] (0xc000ad2000) (3) Data frame handling\nI0203 00:28:06.129047 4001 log.go:181] (0xc000ad2000) (3) Data frame sent\nI0203 00:28:06.136092 4001 log.go:181] (0xc00064d340) Data frame received for 3\nI0203 00:28:06.136122 4001 log.go:181] (0xc000ad2000) (3) Data frame handling\nI0203 00:28:06.136142 4001 log.go:181] (0xc000ad2000) (3) Data frame sent\nI0203 00:28:06.137300 4001 log.go:181] (0xc00064d340) Data frame received for 3\nI0203 00:28:06.137344 4001 log.go:181] (0xc000ad2000) (3) Data frame handling\nI0203 00:28:06.137373 4001 log.go:181] (0xc000ad2000) (3) Data frame sent\nI0203 00:28:06.137407 4001 log.go:181] (0xc00064d340) Data frame received for 5\nI0203 00:28:06.137427 4001 log.go:181] (0xc0006d8000) (5) Data frame handling\nI0203 00:28:06.137456 4001 log.go:181] (0xc0006d8000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.13:31097/\nI0203 00:28:06.142487 4001 log.go:181] (0xc00064d340) Data frame received for 3\nI0203 00:28:06.142511 4001 log.go:181] (0xc000ad2000) (3) Data frame handling\nI0203 00:28:06.142531 4001 log.go:181] (0xc000ad2000) (3) Data frame sent\nI0203 00:28:06.142976 4001 log.go:181] (0xc00064d340) Data frame received for 3\nI0203 00:28:06.143002 4001 log.go:181] (0xc000ad2000) (3) Data frame handling\nI0203 00:28:06.143015 4001 log.go:181] (0xc000ad2000) (3) Data frame sent\nI0203 00:28:06.143035 4001 log.go:181] (0xc00064d340) Data frame received for 5\nI0203 00:28:06.143051 4001 log.go:181] (0xc0006d8000) (5) Data frame handling\nI0203 00:28:06.143069 4001 log.go:181] (0xc0006d8000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.13:31097/\nI0203 00:28:06.148330 4001 log.go:181] (0xc00064d340) Data frame received for 3\nI0203 00:28:06.148348 4001 log.go:181] (0xc000ad2000) (3) Data frame handling\nI0203 00:28:06.148359 4001 log.go:181] (0xc000ad2000) (3) Data frame sent\nI0203 00:28:06.149032 4001 log.go:181] (0xc00064d340) Data frame received for 3\nI0203 00:28:06.149050 4001 log.go:181] (0xc000ad2000) (3) Data frame handling\nI0203 00:28:06.149071 4001 log.go:181] (0xc00064d340) Data frame received for 5\nI0203 00:28:06.149101 4001 log.go:181] (0xc0006d8000) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.13:31097/\nI0203 00:28:06.149118 4001 log.go:181] (0xc000ad2000) (3) Data frame sent\nI0203 00:28:06.149143 4001 log.go:181] (0xc0006d8000) (5) Data frame sent\nI0203 00:28:06.154603 4001 log.go:181] (0xc00064d340) Data frame received for 3\nI0203 00:28:06.154623 4001 log.go:181] (0xc000ad2000) (3) Data frame handling\nI0203 00:28:06.154747 4001 log.go:181] (0xc000ad2000) (3) Data frame sent\nI0203 00:28:06.155104 4001 log.go:181] (0xc00064d340) Data frame received for 3\nI0203 00:28:06.155125 4001 log.go:181] (0xc000ad2000) (3) Data frame handling\nI0203 00:28:06.155135 4001 log.go:181] (0xc000ad2000) (3) Data frame sent\nI0203 00:28:06.155146 4001 log.go:181] (0xc00064d340) Data frame received for 5\nI0203 00:28:06.155166 4001 log.go:181] (0xc0006d8000) (5) Data frame handling\nI0203 00:28:06.155200 4001 log.go:181] (0xc0006d8000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.13:31097/\nI0203 00:28:06.159511 4001 log.go:181] (0xc00064d340) Data frame received for 3\nI0203 00:28:06.159533 4001 log.go:181] (0xc000ad2000) (3) Data frame handling\nI0203 00:28:06.159551 4001 log.go:181] 
(0xc000ad2000) (3) Data frame sent\nI0203 00:28:06.160374 4001 log.go:181] (0xc00064d340) Data frame received for 5\nI0203 00:28:06.160399 4001 log.go:181] (0xc00064d340) Data frame received for 3\nI0203 00:28:06.160423 4001 log.go:181] (0xc000ad2000) (3) Data frame handling\nI0203 00:28:06.160434 4001 log.go:181] (0xc000ad2000) (3) Data frame sent\nI0203 00:28:06.160449 4001 log.go:181] (0xc0006d8000) (5) Data frame handling\nI0203 00:28:06.160474 4001 log.go:181] (0xc0006d8000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.13:31097/\nI0203 00:28:06.164425 4001 log.go:181] (0xc00064d340) Data frame received for 3\nI0203 00:28:06.164451 4001 log.go:181] (0xc000ad2000) (3) Data frame handling\nI0203 00:28:06.164462 4001 log.go:181] (0xc000ad2000) (3) Data frame sent\nI0203 00:28:06.165161 4001 log.go:181] (0xc00064d340) Data frame received for 3\nI0203 00:28:06.165179 4001 log.go:181] (0xc000ad2000) (3) Data frame handling\nI0203 00:28:06.165192 4001 log.go:181] (0xc000ad2000) (3) Data frame sent\nI0203 00:28:06.165323 4001 log.go:181] (0xc00064d340) Data frame received for 5\nI0203 00:28:06.165339 4001 log.go:181] (0xc0006d8000) (5) Data frame handling\nI0203 00:28:06.165363 4001 log.go:181] (0xc0006d8000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.13:31097/\nI0203 00:28:06.171366 4001 log.go:181] (0xc00064d340) Data frame received for 3\nI0203 00:28:06.171401 4001 log.go:181] (0xc000ad2000) (3) Data frame handling\nI0203 00:28:06.171438 4001 log.go:181] (0xc000ad2000) (3) Data frame sent\nI0203 00:28:06.171897 4001 log.go:181] (0xc00064d340) Data frame received for 3\nI0203 00:28:06.171927 4001 log.go:181] (0xc000ad2000) (3) Data frame handling\nI0203 00:28:06.171937 4001 log.go:181] (0xc000ad2000) (3) Data frame sent\nI0203 00:28:06.171952 4001 log.go:181] (0xc00064d340) Data frame received for 5\nI0203 00:28:06.171961 4001 log.go:181] (0xc0006d8000) (5) Data frame handling\nI0203 00:28:06.171969 4001 log.go:181] (0xc0006d8000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.13:31097/\nI0203 00:28:06.176583 4001 log.go:181] (0xc00064d340) Data frame received for 3\nI0203 00:28:06.176604 4001 log.go:181] (0xc000ad2000) (3) Data frame handling\nI0203 00:28:06.176614 4001 log.go:181] (0xc000ad2000) (3) Data frame sent\nI0203 00:28:06.177135 4001 log.go:181] (0xc00064d340) Data frame received for 3\nI0203 00:28:06.177150 4001 log.go:181] (0xc000ad2000) (3) Data frame handling\nI0203 00:28:06.177158 4001 log.go:181] (0xc000ad2000) (3) Data frame sent\nI0203 00:28:06.177179 4001 log.go:181] (0xc00064d340) Data frame received for 5\nI0203 00:28:06.177194 4001 log.go:181] (0xc0006d8000) (5) Data frame handling\nI0203 00:28:06.177205 4001 log.go:181] (0xc0006d8000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.13:31097/\nI0203 00:28:06.180198 4001 log.go:181] (0xc00064d340) Data frame received for 3\nI0203 00:28:06.180218 4001 log.go:181] (0xc000ad2000) (3) Data frame handling\nI0203 00:28:06.180243 4001 log.go:181] (0xc000ad2000) (3) Data frame sent\nI0203 00:28:06.180524 4001 log.go:181] (0xc00064d340) Data frame received for 5\nI0203 00:28:06.180570 4001 log.go:181] (0xc0006d8000) (5) Data frame handling\nI0203 00:28:06.180584 4001 log.go:181] (0xc0006d8000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.13:31097/\nI0203 00:28:06.180600 4001 log.go:181] (0xc00064d340) Data frame received for 3\nI0203 00:28:06.180616 4001 log.go:181] 
(0xc000ad2000) (3) Data frame handling\nI0203 00:28:06.180630 4001 log.go:181] (0xc000ad2000) (3) Data frame sent\nI0203 00:28:06.187784 4001 log.go:181] (0xc00064d340) Data frame received for 3\nI0203 00:28:06.187808 4001 log.go:181] (0xc000ad2000) (3) Data frame handling\nI0203 00:28:06.187825 4001 log.go:181] (0xc000ad2000) (3) Data frame sent\nI0203 00:28:06.188552 4001 log.go:181] (0xc00064d340) Data frame received for 3\nI0203 00:28:06.188590 4001 log.go:181] (0xc000ad2000) (3) Data frame handling\nI0203 00:28:06.188620 4001 log.go:181] (0xc000ad2000) (3) Data frame sent\nI0203 00:28:06.188661 4001 log.go:181] (0xc00064d340) Data frame received for 5\nI0203 00:28:06.188693 4001 log.go:181] (0xc0006d8000) (5) Data frame handling\nI0203 00:28:06.188712 4001 log.go:181] (0xc0006d8000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.13:31097/\nI0203 00:28:06.193234 4001 log.go:181] (0xc00064d340) Data frame received for 3\nI0203 00:28:06.193246 4001 log.go:181] (0xc000ad2000) (3) Data frame handling\nI0203 00:28:06.193252 4001 log.go:181] (0xc000ad2000) (3) Data frame sent\nI0203 00:28:06.193913 4001 log.go:181] (0xc00064d340) Data frame received for 3\nI0203 00:28:06.193928 4001 log.go:181] (0xc000ad2000) (3) Data frame handling\nI0203 00:28:06.193937 4001 log.go:181] (0xc000ad2000) (3) Data frame sent\nI0203 00:28:06.193964 4001 log.go:181] (0xc00064d340) Data frame received for 5\nI0203 00:28:06.193984 4001 log.go:181] (0xc0006d8000) (5) Data frame handling\nI0203 00:28:06.194002 4001 log.go:181] (0xc0006d8000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.13:31097/\nI0203 00:28:06.197627 4001 log.go:181] (0xc00064d340) Data frame received for 3\nI0203 00:28:06.197639 4001 log.go:181] (0xc000ad2000) (3) Data frame handling\nI0203 00:28:06.197645 4001 log.go:181] (0xc000ad2000) (3) Data frame sent\nI0203 00:28:06.198446 4001 log.go:181] (0xc00064d340) Data frame received for 3\nI0203 00:28:06.198456 4001 log.go:181] (0xc000ad2000) (3) Data frame handling\nI0203 00:28:06.198462 4001 log.go:181] (0xc000ad2000) (3) Data frame sent\nI0203 00:28:06.198580 4001 log.go:181] (0xc00064d340) Data frame received for 5\nI0203 00:28:06.198610 4001 log.go:181] (0xc0006d8000) (5) Data frame handling\nI0203 00:28:06.198633 4001 log.go:181] (0xc0006d8000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.13:31097/\nI0203 00:28:06.203884 4001 log.go:181] (0xc00064d340) Data frame received for 3\nI0203 00:28:06.203903 4001 log.go:181] (0xc000ad2000) (3) Data frame handling\nI0203 00:28:06.203913 4001 log.go:181] (0xc000ad2000) (3) Data frame sent\nI0203 00:28:06.204510 4001 log.go:181] (0xc00064d340) Data frame received for 3\nI0203 00:28:06.204554 4001 log.go:181] (0xc000ad2000) (3) Data frame handling\nI0203 00:28:06.204584 4001 log.go:181] (0xc000ad2000) (3) Data frame sent\nI0203 00:28:06.204613 4001 log.go:181] (0xc00064d340) Data frame received for 5\nI0203 00:28:06.204631 4001 log.go:181] (0xc0006d8000) (5) Data frame handling\nI0203 00:28:06.204639 4001 log.go:181] (0xc0006d8000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.13:31097/\nI0203 00:28:06.209973 4001 log.go:181] (0xc00064d340) Data frame received for 3\nI0203 00:28:06.209999 4001 log.go:181] (0xc000ad2000) (3) Data frame handling\nI0203 00:28:06.210029 4001 log.go:181] (0xc000ad2000) (3) Data frame sent\nI0203 00:28:06.210674 4001 log.go:181] (0xc00064d340) Data frame received for 5\nI0203 00:28:06.210703 4001 
log.go:181] (0xc0006d8000) (5) Data frame handling\nI0203 00:28:06.210719 4001 log.go:181] (0xc0006d8000) (5) Data frame sent\n+ echo\nI0203 00:28:06.210745 4001 log.go:181] (0xc00064d340) Data frame received for 3\nI0203 00:28:06.210797 4001 log.go:181] (0xc000ad2000) (3) Data frame handling\nI0203 00:28:06.210814 4001 log.go:181] (0xc000ad2000) (3) Data frame sent\nI0203 00:28:06.210835 4001 log.go:181] (0xc00064d340) Data frame received for 5\nI0203 00:28:06.210858 4001 log.go:181] (0xc0006d8000) (5) Data frame handling\nI0203 00:28:06.210887 4001 log.go:181] (0xc0006d8000) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://172.18.0.13:31097/\nI0203 00:28:06.214516 4001 log.go:181] (0xc00064d340) Data frame received for 3\nI0203 00:28:06.214534 4001 log.go:181] (0xc000ad2000) (3) Data frame handling\nI0203 00:28:06.214552 4001 log.go:181] (0xc000ad2000) (3) Data frame sent\nI0203 00:28:06.214996 4001 log.go:181] (0xc00064d340) Data frame received for 3\nI0203 00:28:06.215017 4001 log.go:181] (0xc000ad2000) (3) Data frame handling\nI0203 00:28:06.215053 4001 log.go:181] (0xc00064d340) Data frame received for 5\nI0203 00:28:06.215078 4001 log.go:181] (0xc0006d8000) (5) Data frame handling\nI0203 00:28:06.217090 4001 log.go:181] (0xc00064d340) Data frame received for 1\nI0203 00:28:06.217122 4001 log.go:181] (0xc0006d8be0) (1) Data frame handling\nI0203 00:28:06.217146 4001 log.go:181] (0xc0006d8be0) (1) Data frame sent\nI0203 00:28:06.217167 4001 log.go:181] (0xc00064d340) (0xc0006d8be0) Stream removed, broadcasting: 1\nI0203 00:28:06.217212 4001 log.go:181] (0xc00064d340) Go away received\nI0203 00:28:06.217725 4001 log.go:181] (0xc00064d340) (0xc0006d8be0) Stream removed, broadcasting: 1\nI0203 00:28:06.217762 4001 log.go:181] (0xc00064d340) (0xc000ad2000) Stream removed, broadcasting: 3\nI0203 00:28:06.217775 4001 log.go:181] (0xc00064d340) (0xc0006d8000) Stream removed, broadcasting: 5\n" Feb 3 00:28:06.223: INFO: stdout: "\naffinity-nodeport-timeout-dcjqw\naffinity-nodeport-timeout-dcjqw\naffinity-nodeport-timeout-dcjqw\naffinity-nodeport-timeout-dcjqw\naffinity-nodeport-timeout-dcjqw\naffinity-nodeport-timeout-dcjqw\naffinity-nodeport-timeout-dcjqw\naffinity-nodeport-timeout-dcjqw\naffinity-nodeport-timeout-dcjqw\naffinity-nodeport-timeout-dcjqw\naffinity-nodeport-timeout-dcjqw\naffinity-nodeport-timeout-dcjqw\naffinity-nodeport-timeout-dcjqw\naffinity-nodeport-timeout-dcjqw\naffinity-nodeport-timeout-dcjqw\naffinity-nodeport-timeout-dcjqw" Feb 3 00:28:06.223: INFO: Received response from host: affinity-nodeport-timeout-dcjqw Feb 3 00:28:06.223: INFO: Received response from host: affinity-nodeport-timeout-dcjqw Feb 3 00:28:06.223: INFO: Received response from host: affinity-nodeport-timeout-dcjqw Feb 3 00:28:06.223: INFO: Received response from host: affinity-nodeport-timeout-dcjqw Feb 3 00:28:06.223: INFO: Received response from host: affinity-nodeport-timeout-dcjqw Feb 3 00:28:06.223: INFO: Received response from host: affinity-nodeport-timeout-dcjqw Feb 3 00:28:06.223: INFO: Received response from host: affinity-nodeport-timeout-dcjqw Feb 3 00:28:06.223: INFO: Received response from host: affinity-nodeport-timeout-dcjqw Feb 3 00:28:06.223: INFO: Received response from host: affinity-nodeport-timeout-dcjqw Feb 3 00:28:06.223: INFO: Received response from host: affinity-nodeport-timeout-dcjqw Feb 3 00:28:06.223: INFO: Received response from host: affinity-nodeport-timeout-dcjqw Feb 3 00:28:06.223: INFO: Received response from host: affinity-nodeport-timeout-dcjqw Feb 3 
00:28:06.223: INFO: Received response from host: affinity-nodeport-timeout-dcjqw Feb 3 00:28:06.223: INFO: Received response from host: affinity-nodeport-timeout-dcjqw Feb 3 00:28:06.223: INFO: Received response from host: affinity-nodeport-timeout-dcjqw Feb 3 00:28:06.223: INFO: Received response from host: affinity-nodeport-timeout-dcjqw Feb 3 00:28:06.223: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=services-7267 exec execpod-affinitybz6lb -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://172.18.0.13:31097/' Feb 3 00:28:06.429: INFO: stderr: "I0203 00:28:06.352808 4019 log.go:181] (0xc000194370) (0xc000b22000) Create stream\nI0203 00:28:06.353026 4019 log.go:181] (0xc000194370) (0xc000b22000) Stream added, broadcasting: 1\nI0203 00:28:06.354902 4019 log.go:181] (0xc000194370) Reply frame received for 1\nI0203 00:28:06.354948 4019 log.go:181] (0xc000194370) (0xc00020f900) Create stream\nI0203 00:28:06.354967 4019 log.go:181] (0xc000194370) (0xc00020f900) Stream added, broadcasting: 3\nI0203 00:28:06.355842 4019 log.go:181] (0xc000194370) Reply frame received for 3\nI0203 00:28:06.355888 4019 log.go:181] (0xc000194370) (0xc00020fb80) Create stream\nI0203 00:28:06.355905 4019 log.go:181] (0xc000194370) (0xc00020fb80) Stream added, broadcasting: 5\nI0203 00:28:06.356601 4019 log.go:181] (0xc000194370) Reply frame received for 5\nI0203 00:28:06.414468 4019 log.go:181] (0xc000194370) Data frame received for 5\nI0203 00:28:06.414491 4019 log.go:181] (0xc00020fb80) (5) Data frame handling\nI0203 00:28:06.414505 4019 log.go:181] (0xc00020fb80) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://172.18.0.13:31097/\nI0203 00:28:06.420610 4019 log.go:181] (0xc000194370) Data frame received for 3\nI0203 00:28:06.420636 4019 log.go:181] (0xc00020f900) (3) Data frame handling\nI0203 00:28:06.420654 4019 log.go:181] (0xc00020f900) (3) Data frame sent\nI0203 00:28:06.421322 4019 log.go:181] (0xc000194370) Data frame received for 3\nI0203 00:28:06.421357 4019 log.go:181] (0xc00020f900) (3) Data frame handling\nI0203 00:28:06.421681 4019 log.go:181] (0xc000194370) Data frame received for 5\nI0203 00:28:06.421702 4019 log.go:181] (0xc00020fb80) (5) Data frame handling\nI0203 00:28:06.423411 4019 log.go:181] (0xc000194370) Data frame received for 1\nI0203 00:28:06.423428 4019 log.go:181] (0xc000b22000) (1) Data frame handling\nI0203 00:28:06.423440 4019 log.go:181] (0xc000b22000) (1) Data frame sent\nI0203 00:28:06.423451 4019 log.go:181] (0xc000194370) (0xc000b22000) Stream removed, broadcasting: 1\nI0203 00:28:06.423505 4019 log.go:181] (0xc000194370) Go away received\nI0203 00:28:06.423720 4019 log.go:181] (0xc000194370) (0xc000b22000) Stream removed, broadcasting: 1\nI0203 00:28:06.423734 4019 log.go:181] (0xc000194370) (0xc00020f900) Stream removed, broadcasting: 3\nI0203 00:28:06.423740 4019 log.go:181] (0xc000194370) (0xc00020fb80) Stream removed, broadcasting: 5\n" Feb 3 00:28:06.429: INFO: stdout: "affinity-nodeport-timeout-dcjqw" Feb 3 00:28:26.429: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=services-7267 exec execpod-affinitybz6lb -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://172.18.0.13:31097/' Feb 3 00:28:26.692: INFO: stderr: "I0203 00:28:26.579088 4037 log.go:181] (0xc00018c370) (0xc000989400) Create stream\nI0203 00:28:26.579156 4037 log.go:181] (0xc00018c370) (0xc000989400) Stream added, broadcasting: 1\nI0203 
00:28:26.581344 4037 log.go:181] (0xc00018c370) Reply frame received for 1\nI0203 00:28:26.581407 4037 log.go:181] (0xc00018c370) (0xc0006ac280) Create stream\nI0203 00:28:26.581431 4037 log.go:181] (0xc00018c370) (0xc0006ac280) Stream added, broadcasting: 3\nI0203 00:28:26.582415 4037 log.go:181] (0xc00018c370) Reply frame received for 3\nI0203 00:28:26.582455 4037 log.go:181] (0xc00018c370) (0xc00091e1e0) Create stream\nI0203 00:28:26.582469 4037 log.go:181] (0xc00018c370) (0xc00091e1e0) Stream added, broadcasting: 5\nI0203 00:28:26.583373 4037 log.go:181] (0xc00018c370) Reply frame received for 5\nI0203 00:28:26.676768 4037 log.go:181] (0xc00018c370) Data frame received for 5\nI0203 00:28:26.676798 4037 log.go:181] (0xc00091e1e0) (5) Data frame handling\nI0203 00:28:26.676817 4037 log.go:181] (0xc00091e1e0) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://172.18.0.13:31097/\nI0203 00:28:26.682019 4037 log.go:181] (0xc00018c370) Data frame received for 3\nI0203 00:28:26.682031 4037 log.go:181] (0xc0006ac280) (3) Data frame handling\nI0203 00:28:26.682038 4037 log.go:181] (0xc0006ac280) (3) Data frame sent\nI0203 00:28:26.682788 4037 log.go:181] (0xc00018c370) Data frame received for 5\nI0203 00:28:26.682809 4037 log.go:181] (0xc00091e1e0) (5) Data frame handling\nI0203 00:28:26.682841 4037 log.go:181] (0xc00018c370) Data frame received for 3\nI0203 00:28:26.682870 4037 log.go:181] (0xc0006ac280) (3) Data frame handling\nI0203 00:28:26.684488 4037 log.go:181] (0xc00018c370) Data frame received for 1\nI0203 00:28:26.684524 4037 log.go:181] (0xc000989400) (1) Data frame handling\nI0203 00:28:26.684549 4037 log.go:181] (0xc000989400) (1) Data frame sent\nI0203 00:28:26.684572 4037 log.go:181] (0xc00018c370) (0xc000989400) Stream removed, broadcasting: 1\nI0203 00:28:26.684598 4037 log.go:181] (0xc00018c370) Go away received\nI0203 00:28:26.684945 4037 log.go:181] (0xc00018c370) (0xc000989400) Stream removed, broadcasting: 1\nI0203 00:28:26.684960 4037 log.go:181] (0xc00018c370) (0xc0006ac280) Stream removed, broadcasting: 3\nI0203 00:28:26.684967 4037 log.go:181] (0xc00018c370) (0xc00091e1e0) Stream removed, broadcasting: 5\n" Feb 3 00:28:26.692: INFO: stdout: "affinity-nodeport-timeout-dcjqw" Feb 3 00:28:46.692: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=services-7267 exec execpod-affinitybz6lb -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://172.18.0.13:31097/' Feb 3 00:28:46.943: INFO: stderr: "I0203 00:28:46.831202 4055 log.go:181] (0xc0000a9080) (0xc000a503c0) Create stream\nI0203 00:28:46.831256 4055 log.go:181] (0xc0000a9080) (0xc000a503c0) Stream added, broadcasting: 1\nI0203 00:28:46.833458 4055 log.go:181] (0xc0000a9080) Reply frame received for 1\nI0203 00:28:46.833497 4055 log.go:181] (0xc0000a9080) (0xc000445a40) Create stream\nI0203 00:28:46.833510 4055 log.go:181] (0xc0000a9080) (0xc000445a40) Stream added, broadcasting: 3\nI0203 00:28:46.834583 4055 log.go:181] (0xc0000a9080) Reply frame received for 3\nI0203 00:28:46.834630 4055 log.go:181] (0xc0000a9080) (0xc000445cc0) Create stream\nI0203 00:28:46.834649 4055 log.go:181] (0xc0000a9080) (0xc000445cc0) Stream added, broadcasting: 5\nI0203 00:28:46.835427 4055 log.go:181] (0xc0000a9080) Reply frame received for 5\nI0203 00:28:46.927372 4055 log.go:181] (0xc0000a9080) Data frame received for 5\nI0203 00:28:46.927394 4055 log.go:181] (0xc000445cc0) (5) Data frame handling\nI0203 00:28:46.927407 4055 log.go:181] (0xc000445cc0) (5) Data 
frame sent\n+ curl -q -s --connect-timeout 2 http://172.18.0.13:31097/\nI0203 00:28:46.933089 4055 log.go:181] (0xc0000a9080) Data frame received for 3\nI0203 00:28:46.933109 4055 log.go:181] (0xc000445a40) (3) Data frame handling\nI0203 00:28:46.933119 4055 log.go:181] (0xc000445a40) (3) Data frame sent\nI0203 00:28:46.933853 4055 log.go:181] (0xc0000a9080) Data frame received for 5\nI0203 00:28:46.933876 4055 log.go:181] (0xc000445cc0) (5) Data frame handling\nI0203 00:28:46.934199 4055 log.go:181] (0xc0000a9080) Data frame received for 3\nI0203 00:28:46.934224 4055 log.go:181] (0xc000445a40) (3) Data frame handling\nI0203 00:28:46.936122 4055 log.go:181] (0xc0000a9080) Data frame received for 1\nI0203 00:28:46.936149 4055 log.go:181] (0xc000a503c0) (1) Data frame handling\nI0203 00:28:46.936164 4055 log.go:181] (0xc000a503c0) (1) Data frame sent\nI0203 00:28:46.936184 4055 log.go:181] (0xc0000a9080) (0xc000a503c0) Stream removed, broadcasting: 1\nI0203 00:28:46.936238 4055 log.go:181] (0xc0000a9080) Go away received\nI0203 00:28:46.936613 4055 log.go:181] (0xc0000a9080) (0xc000a503c0) Stream removed, broadcasting: 1\nI0203 00:28:46.936631 4055 log.go:181] (0xc0000a9080) (0xc000445a40) Stream removed, broadcasting: 3\nI0203 00:28:46.936640 4055 log.go:181] (0xc0000a9080) (0xc000445cc0) Stream removed, broadcasting: 5\n" Feb 3 00:28:46.943: INFO: stdout: "affinity-nodeport-timeout-dcjqw" Feb 3 00:29:06.943: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=services-7267 exec execpod-affinitybz6lb -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://172.18.0.13:31097/' Feb 3 00:29:07.189: INFO: stderr: "I0203 00:29:07.086737 4073 log.go:181] (0xc00003a420) (0xc000547f40) Create stream\nI0203 00:29:07.086799 4073 log.go:181] (0xc00003a420) (0xc000547f40) Stream added, broadcasting: 1\nI0203 00:29:07.088585 4073 log.go:181] (0xc00003a420) Reply frame received for 1\nI0203 00:29:07.088641 4073 log.go:181] (0xc00003a420) (0xc0002143c0) Create stream\nI0203 00:29:07.088672 4073 log.go:181] (0xc00003a420) (0xc0002143c0) Stream added, broadcasting: 3\nI0203 00:29:07.089777 4073 log.go:181] (0xc00003a420) Reply frame received for 3\nI0203 00:29:07.089827 4073 log.go:181] (0xc00003a420) (0xc000b84be0) Create stream\nI0203 00:29:07.089840 4073 log.go:181] (0xc00003a420) (0xc000b84be0) Stream added, broadcasting: 5\nI0203 00:29:07.090717 4073 log.go:181] (0xc00003a420) Reply frame received for 5\nI0203 00:29:07.176745 4073 log.go:181] (0xc00003a420) Data frame received for 5\nI0203 00:29:07.176779 4073 log.go:181] (0xc000b84be0) (5) Data frame handling\nI0203 00:29:07.176801 4073 log.go:181] (0xc000b84be0) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://172.18.0.13:31097/\nI0203 00:29:07.179652 4073 log.go:181] (0xc00003a420) Data frame received for 3\nI0203 00:29:07.179671 4073 log.go:181] (0xc0002143c0) (3) Data frame handling\nI0203 00:29:07.179686 4073 log.go:181] (0xc0002143c0) (3) Data frame sent\nI0203 00:29:07.180206 4073 log.go:181] (0xc00003a420) Data frame received for 5\nI0203 00:29:07.180226 4073 log.go:181] (0xc000b84be0) (5) Data frame handling\nI0203 00:29:07.180351 4073 log.go:181] (0xc00003a420) Data frame received for 3\nI0203 00:29:07.180382 4073 log.go:181] (0xc0002143c0) (3) Data frame handling\nI0203 00:29:07.181871 4073 log.go:181] (0xc00003a420) Data frame received for 1\nI0203 00:29:07.181927 4073 log.go:181] (0xc000547f40) (1) Data frame handling\nI0203 00:29:07.181945 4073 log.go:181] 
(0xc000547f40) (1) Data frame sent\nI0203 00:29:07.181956 4073 log.go:181] (0xc00003a420) (0xc000547f40) Stream removed, broadcasting: 1\nI0203 00:29:07.181965 4073 log.go:181] (0xc00003a420) Go away received\nI0203 00:29:07.182317 4073 log.go:181] (0xc00003a420) (0xc000547f40) Stream removed, broadcasting: 1\nI0203 00:29:07.182335 4073 log.go:181] (0xc00003a420) (0xc0002143c0) Stream removed, broadcasting: 3\nI0203 00:29:07.182344 4073 log.go:181] (0xc00003a420) (0xc000b84be0) Stream removed, broadcasting: 5\n" Feb 3 00:29:07.189: INFO: stdout: "affinity-nodeport-timeout-djlqt" Feb 3 00:29:07.189: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-nodeport-timeout in namespace services-7267, will wait for the garbage collector to delete the pods Feb 3 00:29:07.370: INFO: Deleting ReplicationController affinity-nodeport-timeout took: 81.78591ms Feb 3 00:29:07.970: INFO: Terminating ReplicationController affinity-nodeport-timeout pods took: 600.212667ms [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 3 00:29:40.346: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-7267" for this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 • [SLOW TEST:111.149 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","total":309,"completed":279,"skipped":4859,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 3 00:29:40.358: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:85 [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 Feb 3 00:29:40.447: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) Feb 3 00:29:40.473: INFO: Pod name sample-pod: Found 0 pods out of 1 Feb 3 00:29:45.482: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Feb 3 00:29:45.483: INFO: Creating deployment "test-rolling-update-deployment" Feb 3 00:29:45.488: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has Feb 3 00:29:45.499: INFO: new 
replicaset for deployment "test-rolling-update-deployment" is yet to be created Feb 3 00:29:47.508: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected Feb 3 00:29:47.510: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63747908985, loc:(*time.Location)(0x7962e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63747908985, loc:(*time.Location)(0x7962e20)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63747908985, loc:(*time.Location)(0x7962e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63747908985, loc:(*time.Location)(0x7962e20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-6b6bf9df46\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 3 00:29:49.555: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:79 Feb 3 00:29:49.563: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:{test-rolling-update-deployment deployment-2779 dabf59b6-c65a-494f-9aac-69e9b55540e8 4196491 1 2021-02-03 00:29:45 +0000 UTC map[name:sample-pod] map[deployment.kubernetes.io/revision:3546343826724305833] [] [] [{e2e.test Update apps/v1 2021-02-03 00:29:45 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2021-02-03 00:29:48 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.21 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc00541a818 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2021-02-03 00:29:45 +0000 UTC,LastTransitionTime:2021-02-03 00:29:45 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rolling-update-deployment-6b6bf9df46" has successfully progressed.,LastUpdateTime:2021-02-03 00:29:48 +0000 UTC,LastTransitionTime:2021-02-03 00:29:45 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Feb 3 00:29:49.565: INFO: New ReplicaSet "test-rolling-update-deployment-6b6bf9df46" of Deployment "test-rolling-update-deployment": &ReplicaSet{ObjectMeta:{test-rolling-update-deployment-6b6bf9df46 deployment-2779 983852f9-9249-44f4-9b87-7d2bbaf151c7 4196479 1 2021-02-03 00:29:45 +0000 UTC map[name:sample-pod pod-template-hash:6b6bf9df46] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305833] [{apps/v1 Deployment test-rolling-update-deployment dabf59b6-c65a-494f-9aac-69e9b55540e8 0xc00541acb7 0xc00541acb8}] [] [{kube-controller-manager Update apps/v1 2021-02-03 00:29:48 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"dabf59b6-c65a-494f-9aac-69e9b55540e8\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 6b6bf9df46,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod-template-hash:6b6bf9df46] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.21 [] [] [] [] [] {map[] map[]} [] [] 
nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc00541ad48 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Feb 3 00:29:49.565: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": Feb 3 00:29:49.566: INFO: &ReplicaSet{ObjectMeta:{test-rolling-update-controller deployment-2779 968f5703-9d22-4ede-a39e-9c09c7bf60d5 4196490 2 2021-02-03 00:29:40 +0000 UTC map[name:sample-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305832] [{apps/v1 Deployment test-rolling-update-deployment dabf59b6-c65a-494f-9aac-69e9b55540e8 0xc00541aba7 0xc00541aba8}] [] [{e2e.test Update apps/v1 2021-02-03 00:29:40 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2021-02-03 00:29:48 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"dabf59b6-c65a-494f-9aac-69e9b55540e8\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc00541ac48 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Feb 3 00:29:49.568: INFO: Pod "test-rolling-update-deployment-6b6bf9df46-blgkt" is available: &Pod{ObjectMeta:{test-rolling-update-deployment-6b6bf9df46-blgkt test-rolling-update-deployment-6b6bf9df46- 
deployment-2779 09d53d30-fbc3-42d0-966d-a31336253970 4196478 0 2021-02-03 00:29:45 +0000 UTC map[name:sample-pod pod-template-hash:6b6bf9df46] map[] [{apps/v1 ReplicaSet test-rolling-update-deployment-6b6bf9df46 983852f9-9249-44f4-9b87-7d2bbaf151c7 0xc00541b147 0xc00541b148}] [] [{kube-controller-manager Update v1 2021-02-03 00:29:45 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"983852f9-9249-44f4-9b87-7d2bbaf151c7\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-02-03 00:29:48 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.217\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-dzwv7,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-dzwv7,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:k8s.gcr.io/e2e-test-images/agnhost:2.21,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-dzwv7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,
FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-03 00:29:45 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-03 00:29:48 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-03 00:29:48 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-03 00:29:45 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:10.244.2.217,StartTime:2021-02-03 00:29:45 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-02-03 00:29:48 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/agnhost:2.21,ImageID:k8s.gcr.io/e2e-test-images/agnhost@sha256:ab055cd3d45f50b90732c14593a5bf50f210871bb4f91994c756fc22db6d922a,ContainerID:containerd://8610e047a5c8025c1de43b7d0c87091eb8f359975ba23e6eb6db152f1a92d974,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.217,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 3 00:29:49.569: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-2779" for this suite. 
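For readers following along, the rolling-update behaviour exercised by this test (one replica, RollingUpdate strategy with 25% maxSurge/maxUnavailable, the agnhost image) corresponds roughly to the Deployment sketched below. This is a reconstruction from the object dump above, not the test's own source; names and values outside that dump are illustrative.

# Sketch of a Deployment equivalent to the one the test creates.
kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-rolling-update-deployment
  labels:
    name: sample-pod
spec:
  replicas: 1
  selector:
    matchLabels:
      name: sample-pod
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
  template:
    metadata:
      labels:
        name: sample-pod
    spec:
      containers:
      - name: agnhost
        image: k8s.gcr.io/e2e-test-images/agnhost:2.21
        imagePullPolicy: IfNotPresent
EOF
# Watch the adopted (old) ReplicaSet scale down while the new one comes up:
kubectl rollout status deployment/test-rolling-update-deployment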
• [SLOW TEST:9.217 seconds] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":309,"completed":280,"skipped":4874,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 3 00:29:49.575: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-preemption STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:90 Feb 3 00:29:50.138: INFO: Waiting up to 1m0s for all nodes to be ready Feb 3 00:30:50.166: INFO: Waiting for terminating namespaces to be deleted... [It] validates lower priority pod preemption by critical pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Create pods that use 2/3 of node resources. Feb 3 00:30:50.229: INFO: Created pod: pod0-sched-preemption-low-priority Feb 3 00:30:50.347: INFO: Created pod: pod1-sched-preemption-medium-priority STEP: Wait for pods to be scheduled. STEP: Run a critical pod that use same resources as that of a lower priority pod [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 3 00:31:44.440: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-1917" for this suite. 
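The preemption scenario above hinges on pod priority: lower- and medium-priority pods are scheduled first, then a higher-priority (critical) pod that needs their resources causes one of them to be evicted. A minimal sketch of how such priorities are expressed is below; the class names, values, and CPU request are illustrative and are not the ones defined by the e2e framework.

# A low-priority class plus a pod that uses it; a later pod submitted with a
# higher-value PriorityClass can preempt this pod when node resources run out.
kubectl apply -f - <<'EOF'
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: low-priority            # illustrative name
value: 1000
globalDefault: false
description: "Low priority, preemptable by higher-priority pods"
---
apiVersion: v1
kind: Pod
metadata:
  name: pod0-low-priority       # illustrative name
spec:
  priorityClassName: low-priority
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.2
    resources:
      requests:
        cpu: "500m"             # illustrative request
EOF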
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:78 • [SLOW TEST:114.979 seconds] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates lower priority pod preemption by critical pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance]","total":309,"completed":281,"skipped":4904,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 3 00:31:44.555: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54 [It] should serve a basic image on each replica with a public image [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating replication controller my-hostname-basic-940f671c-232b-4350-b552-79d2fe75f70f Feb 3 00:31:45.248: INFO: Pod name my-hostname-basic-940f671c-232b-4350-b552-79d2fe75f70f: Found 0 pods out of 1 Feb 3 00:31:50.251: INFO: Pod name my-hostname-basic-940f671c-232b-4350-b552-79d2fe75f70f: Found 1 pods out of 1 Feb 3 00:31:50.251: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-940f671c-232b-4350-b552-79d2fe75f70f" are running Feb 3 00:31:50.253: INFO: Pod "my-hostname-basic-940f671c-232b-4350-b552-79d2fe75f70f-2f4lg" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-02-03 00:31:45 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-02-03 00:31:48 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-02-03 00:31:48 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-02-03 00:31:45 +0000 UTC Reason: Message:}]) Feb 3 00:31:50.254: INFO: Trying to dial the pod Feb 3 00:31:55.267: INFO: Controller my-hostname-basic-940f671c-232b-4350-b552-79d2fe75f70f: Got expected result from replica 1 [my-hostname-basic-940f671c-232b-4350-b552-79d2fe75f70f-2f4lg]: "my-hostname-basic-940f671c-232b-4350-b552-79d2fe75f70f-2f4lg", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 3 00:31:55.267: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-838" 
for this suite. • [SLOW TEST:10.724 seconds] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]","total":309,"completed":282,"skipped":4926,"failed":0} SSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 3 00:31:55.279: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:92 Feb 3 00:31:55.359: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Feb 3 00:31:55.374: INFO: Waiting for terminating namespaces to be deleted... Feb 3 00:31:55.393: INFO: Logging pods the apiserver thinks is on node leguer-worker before test Feb 3 00:31:55.403: INFO: rally-0a12c122-7dnmol6z-vwbwf from c-rally-0a12c122-vmv18pta started at 2021-01-28 03:37:38 +0000 UTC (1 container statuses recorded) Feb 3 00:31:55.403: INFO: Container rally-0a12c122-7dnmol6z ready: true, restart count 0 Feb 3 00:31:55.403: INFO: rally-0a12c122-fagfvvpw-sskvj from c-rally-0a12c122-vmv18pta started at 2021-01-28 03:37:54 +0000 UTC (1 container statuses recorded) Feb 3 00:31:55.403: INFO: Container rally-0a12c122-fagfvvpw ready: true, restart count 0 Feb 3 00:31:55.403: INFO: rally-0a12c122-iqj2mcat-2hfpj from c-rally-0a12c122-vmv18pta started at 2021-01-28 03:37:16 +0000 UTC (1 container statuses recorded) Feb 3 00:31:55.403: INFO: Container rally-0a12c122-iqj2mcat ready: true, restart count 0 Feb 3 00:31:55.403: INFO: rally-0a12c122-iqj2mcat-swp7f from c-rally-0a12c122-vmv18pta started at 2021-01-28 03:37:16 +0000 UTC (1 container statuses recorded) Feb 3 00:31:55.403: INFO: Container rally-0a12c122-iqj2mcat ready: true, restart count 0 Feb 3 00:31:55.403: INFO: rally-a8f48c6d-3kmika18-pdtzv from c-rally-a8f48c6d-bmjklszo started at 2021-01-10 20:04:23 +0000 UTC (1 container statuses recorded) Feb 3 00:31:55.403: INFO: Container rally-a8f48c6d-3kmika18 ready: true, restart count 0 Feb 3 00:31:55.403: INFO: rally-a8f48c6d-3kmika18-pllzg from c-rally-a8f48c6d-bmjklszo started at 2021-01-10 20:04:23 +0000 UTC (1 container statuses recorded) Feb 3 00:31:55.403: INFO: Container rally-a8f48c6d-3kmika18 ready: true, restart count 0 Feb 3 00:31:55.403: INFO: rally-a8f48c6d-4cyi45kq-j5tzz from c-rally-a8f48c6d-bmjklszo started at 2021-01-10 20:04:23 +0000 UTC (1 container statuses recorded) Feb 3 00:31:55.403: INFO: Container rally-a8f48c6d-4cyi45kq ready: true, restart count 0 Feb 3 00:31:55.403: INFO: rally-a8f48c6d-f3hls6a3-57dwc from c-rally-a8f48c6d-bmjklszo started at 2021-01-10 20:04:32 +0000 
UTC (1 container statuses recorded) Feb 3 00:31:55.403: INFO: Container rally-a8f48c6d-f3hls6a3 ready: true, restart count 0 Feb 3 00:31:55.403: INFO: rally-a8f48c6d-1y3amfc0-lp8st from c-rally-a8f48c6d-lypjtwol started at 2021-01-10 20:04:32 +0000 UTC (1 container statuses recorded) Feb 3 00:31:55.403: INFO: Container rally-a8f48c6d-1y3amfc0 ready: true, restart count 0 Feb 3 00:31:55.403: INFO: rally-a8f48c6d-9pqmjehi-9zwjj from c-rally-a8f48c6d-lypjtwol started at 2021-01-10 20:04:23 +0000 UTC (1 container statuses recorded) Feb 3 00:31:55.403: INFO: Container rally-a8f48c6d-9pqmjehi ready: true, restart count 0 Feb 3 00:31:55.403: INFO: chaos-controller-manager-69c479c674-s796v from default started at 2021-01-10 20:58:24 +0000 UTC (1 container statuses recorded) Feb 3 00:31:55.403: INFO: Container chaos-mesh ready: true, restart count 0 Feb 3 00:31:55.403: INFO: chaos-daemon-lv692 from default started at 2021-01-10 20:58:25 +0000 UTC (1 container statuses recorded) Feb 3 00:31:55.403: INFO: Container chaos-daemon ready: true, restart count 0 Feb 3 00:31:55.403: INFO: kindnet-psm25 from kube-system started at 2021-01-10 17:38:10 +0000 UTC (1 container statuses recorded) Feb 3 00:31:55.403: INFO: Container kindnet-cni ready: true, restart count 0 Feb 3 00:31:55.403: INFO: kube-proxy-bmbcs from kube-system started at 2021-01-10 17:38:10 +0000 UTC (1 container statuses recorded) Feb 3 00:31:55.403: INFO: Container kube-proxy ready: true, restart count 0 Feb 3 00:31:55.403: INFO: my-hostname-basic-940f671c-232b-4350-b552-79d2fe75f70f-2f4lg from replication-controller-838 started at 2021-02-03 00:31:45 +0000 UTC (1 container statuses recorded) Feb 3 00:31:55.403: INFO: Container my-hostname-basic-940f671c-232b-4350-b552-79d2fe75f70f ready: true, restart count 0 Feb 3 00:31:55.403: INFO: Logging pods the apiserver thinks is on node leguer-worker2 before test Feb 3 00:31:55.413: INFO: rally-0a12c122-4xacdhsf-44v5r from c-rally-0a12c122-vmv18pta started at 2021-01-28 03:37:16 +0000 UTC (1 container statuses recorded) Feb 3 00:31:55.413: INFO: Container rally-0a12c122-4xacdhsf ready: true, restart count 0 Feb 3 00:31:55.413: INFO: rally-0a12c122-4xacdhsf-5c974 from c-rally-0a12c122-vmv18pta started at 2021-01-28 03:37:16 +0000 UTC (1 container statuses recorded) Feb 3 00:31:55.413: INFO: Container rally-0a12c122-4xacdhsf ready: true, restart count 0 Feb 3 00:31:55.413: INFO: rally-0a12c122-7dnmol6z-n9ztn from c-rally-0a12c122-vmv18pta started at 2021-01-28 03:37:38 +0000 UTC (1 container statuses recorded) Feb 3 00:31:55.413: INFO: Container rally-0a12c122-7dnmol6z ready: true, restart count 0 Feb 3 00:31:55.413: INFO: rally-0a12c122-fagfvvpw-cxsgt from c-rally-0a12c122-vmv18pta started at 2021-01-28 03:37:53 +0000 UTC (1 container statuses recorded) Feb 3 00:31:55.413: INFO: Container rally-0a12c122-fagfvvpw ready: true, restart count 0 Feb 3 00:31:55.413: INFO: rally-0a12c122-lqiac6cu-6fsz6 from c-rally-0a12c122-vmv18pta started at 2021-01-28 03:37:16 +0000 UTC (1 container statuses recorded) Feb 3 00:31:55.413: INFO: Container rally-0a12c122-lqiac6cu ready: true, restart count 0 Feb 3 00:31:55.413: INFO: rally-0a12c122-lqiac6cu-99jsp from c-rally-0a12c122-vmv18pta started at 2021-01-28 03:37:16 +0000 UTC (1 container statuses recorded) Feb 3 00:31:55.413: INFO: Container rally-0a12c122-lqiac6cu ready: true, restart count 0 Feb 3 00:31:55.413: INFO: rally-a8f48c6d-4cyi45kq-knr4r from c-rally-a8f48c6d-bmjklszo started at 2021-01-10 20:04:23 +0000 UTC (1 container statuses recorded) Feb 3 
00:31:55.413: INFO: Container rally-a8f48c6d-4cyi45kq ready: true, restart count 0 Feb 3 00:31:55.413: INFO: rally-a8f48c6d-f3hls6a3-dwt8n from c-rally-a8f48c6d-bmjklszo started at 2021-01-10 20:04:32 +0000 UTC (1 container statuses recorded) Feb 3 00:31:55.413: INFO: Container rally-a8f48c6d-f3hls6a3 ready: true, restart count 0 Feb 3 00:31:55.413: INFO: rally-a8f48c6d-1y3amfc0-hh9qk from c-rally-a8f48c6d-lypjtwol started at 2021-01-10 20:04:32 +0000 UTC (1 container statuses recorded) Feb 3 00:31:55.413: INFO: Container rally-a8f48c6d-1y3amfc0 ready: true, restart count 0 Feb 3 00:31:55.413: INFO: rally-a8f48c6d-9pqmjehi-85slb from c-rally-a8f48c6d-lypjtwol started at 2021-01-10 20:04:23 +0000 UTC (1 container statuses recorded) Feb 3 00:31:55.413: INFO: Container rally-a8f48c6d-9pqmjehi ready: true, restart count 0 Feb 3 00:31:55.413: INFO: rally-a8f48c6d-vnukxqu0-llj24 from c-rally-a8f48c6d-lypjtwol started at 2021-01-10 20:04:23 +0000 UTC (1 container statuses recorded) Feb 3 00:31:55.413: INFO: Container rally-a8f48c6d-vnukxqu0 ready: true, restart count 0 Feb 3 00:31:55.413: INFO: rally-a8f48c6d-vnukxqu0-v85kr from c-rally-a8f48c6d-lypjtwol started at 2021-01-10 20:04:23 +0000 UTC (1 container statuses recorded) Feb 3 00:31:55.413: INFO: Container rally-a8f48c6d-vnukxqu0 ready: true, restart count 0 Feb 3 00:31:55.413: INFO: chaos-daemon-ffkg7 from default started at 2021-01-10 20:58:25 +0000 UTC (1 container statuses recorded) Feb 3 00:31:55.413: INFO: Container chaos-daemon ready: true, restart count 0 Feb 3 00:31:55.413: INFO: kindnet-8wggd from kube-system started at 2021-01-10 17:38:10 +0000 UTC (1 container statuses recorded) Feb 3 00:31:55.413: INFO: Container kindnet-cni ready: true, restart count 0 Feb 3 00:31:55.413: INFO: kube-proxy-29gxg from kube-system started at 2021-01-10 17:38:09 +0000 UTC (1 container statuses recorded) Feb 3 00:31:55.413: INFO: Container kube-proxy ready: true, restart count 0 Feb 3 00:31:55.413: INFO: pod1-sched-preemption-medium-priority from sched-preemption-1917 started at 2021-02-03 00:30:55 +0000 UTC (1 container statuses recorded) Feb 3 00:31:55.413: INFO: Container pod1-sched-preemption-medium-priority ready: false, restart count 0 [It] validates resource limits of pods that are allowed to run [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: verifying the node has the label node leguer-worker STEP: verifying the node has the label node leguer-worker2 Feb 3 00:31:55.508: INFO: Pod rally-0a12c122-4xacdhsf-44v5r requesting resource cpu=0m on Node leguer-worker2 Feb 3 00:31:55.508: INFO: Pod rally-0a12c122-4xacdhsf-5c974 requesting resource cpu=0m on Node leguer-worker2 Feb 3 00:31:55.508: INFO: Pod rally-0a12c122-7dnmol6z-n9ztn requesting resource cpu=0m on Node leguer-worker2 Feb 3 00:31:55.508: INFO: Pod rally-0a12c122-7dnmol6z-vwbwf requesting resource cpu=0m on Node leguer-worker Feb 3 00:31:55.508: INFO: Pod rally-0a12c122-fagfvvpw-cxsgt requesting resource cpu=0m on Node leguer-worker2 Feb 3 00:31:55.508: INFO: Pod rally-0a12c122-fagfvvpw-sskvj requesting resource cpu=0m on Node leguer-worker Feb 3 00:31:55.508: INFO: Pod rally-0a12c122-iqj2mcat-2hfpj requesting resource cpu=0m on Node leguer-worker Feb 3 00:31:55.508: INFO: Pod rally-0a12c122-iqj2mcat-swp7f requesting resource cpu=0m on Node leguer-worker Feb 3 00:31:55.508: INFO: Pod rally-0a12c122-lqiac6cu-6fsz6 requesting resource cpu=0m on Node leguer-worker2 Feb 3 00:31:55.508: INFO: Pod 
rally-0a12c122-lqiac6cu-99jsp requesting resource cpu=0m on Node leguer-worker2 Feb 3 00:31:55.508: INFO: Pod rally-a8f48c6d-3kmika18-pdtzv requesting resource cpu=0m on Node leguer-worker Feb 3 00:31:55.508: INFO: Pod rally-a8f48c6d-3kmika18-pllzg requesting resource cpu=0m on Node leguer-worker Feb 3 00:31:55.508: INFO: Pod rally-a8f48c6d-4cyi45kq-j5tzz requesting resource cpu=0m on Node leguer-worker Feb 3 00:31:55.508: INFO: Pod rally-a8f48c6d-4cyi45kq-knr4r requesting resource cpu=0m on Node leguer-worker2 Feb 3 00:31:55.508: INFO: Pod rally-a8f48c6d-f3hls6a3-57dwc requesting resource cpu=0m on Node leguer-worker Feb 3 00:31:55.508: INFO: Pod rally-a8f48c6d-f3hls6a3-dwt8n requesting resource cpu=0m on Node leguer-worker2 Feb 3 00:31:55.508: INFO: Pod rally-a8f48c6d-1y3amfc0-hh9qk requesting resource cpu=0m on Node leguer-worker2 Feb 3 00:31:55.508: INFO: Pod rally-a8f48c6d-1y3amfc0-lp8st requesting resource cpu=0m on Node leguer-worker Feb 3 00:31:55.508: INFO: Pod rally-a8f48c6d-9pqmjehi-85slb requesting resource cpu=0m on Node leguer-worker2 Feb 3 00:31:55.508: INFO: Pod rally-a8f48c6d-9pqmjehi-9zwjj requesting resource cpu=0m on Node leguer-worker Feb 3 00:31:55.508: INFO: Pod rally-a8f48c6d-vnukxqu0-llj24 requesting resource cpu=0m on Node leguer-worker2 Feb 3 00:31:55.508: INFO: Pod rally-a8f48c6d-vnukxqu0-v85kr requesting resource cpu=0m on Node leguer-worker2 Feb 3 00:31:55.508: INFO: Pod chaos-controller-manager-69c479c674-s796v requesting resource cpu=25m on Node leguer-worker Feb 3 00:31:55.508: INFO: Pod chaos-daemon-ffkg7 requesting resource cpu=0m on Node leguer-worker2 Feb 3 00:31:55.508: INFO: Pod chaos-daemon-lv692 requesting resource cpu=0m on Node leguer-worker Feb 3 00:31:55.508: INFO: Pod kindnet-8wggd requesting resource cpu=100m on Node leguer-worker2 Feb 3 00:31:55.508: INFO: Pod kindnet-psm25 requesting resource cpu=100m on Node leguer-worker Feb 3 00:31:55.508: INFO: Pod kube-proxy-29gxg requesting resource cpu=0m on Node leguer-worker2 Feb 3 00:31:55.508: INFO: Pod kube-proxy-bmbcs requesting resource cpu=0m on Node leguer-worker Feb 3 00:31:55.508: INFO: Pod my-hostname-basic-940f671c-232b-4350-b552-79d2fe75f70f-2f4lg requesting resource cpu=0m on Node leguer-worker Feb 3 00:31:55.508: INFO: Pod pod1-sched-preemption-medium-priority requesting resource cpu=0m on Node leguer-worker2 STEP: Starting Pods to consume most of the cluster CPU. Feb 3 00:31:55.508: INFO: Creating a pod which consumes cpu=11130m on Node leguer-worker2 Feb 3 00:31:55.514: INFO: Creating a pod which consumes cpu=11112m on Node leguer-worker STEP: Creating another pod that requires unavailable amount of CPU. 
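The "filler" pods created at this step reserve most of the schedulable CPU purely through their resource requests; the additional pod created next asks for more CPU than remains and is expected to stay Pending. A sketch of such a filler pod is below; the pod name is illustrative (the test generates UUID-based names), while the image and the 11130m figure are taken from the log above.

# A pause pod that does nothing but reserve CPU via its resource request.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: filler-pod-example       # illustrative name
spec:
  containers:
  - name: filler
    image: k8s.gcr.io/pause:3.2
    resources:
      requests:
        cpu: "11130m"            # value logged for leguer-worker2 above
EOF
# A subsequent pod requesting more CPU than remains will stay Pending with a
# FailedScheduling event like the one recorded below.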
STEP: Considering event: Type = [Normal], Name = [filler-pod-80a859b2-dd67-43e9-af8b-0e8410ccb99e.1660158234255601], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1885/filler-pod-80a859b2-dd67-43e9-af8b-0e8410ccb99e to leguer-worker2] STEP: Considering event: Type = [Normal], Name = [filler-pod-80a859b2-dd67-43e9-af8b-0e8410ccb99e.166015829a153359], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-80a859b2-dd67-43e9-af8b-0e8410ccb99e.1660158306cbcd56], Reason = [Created], Message = [Created container filler-pod-80a859b2-dd67-43e9-af8b-0e8410ccb99e] STEP: Considering event: Type = [Normal], Name = [filler-pod-80a859b2-dd67-43e9-af8b-0e8410ccb99e.16601583163ee848], Reason = [Started], Message = [Started container filler-pod-80a859b2-dd67-43e9-af8b-0e8410ccb99e] STEP: Considering event: Type = [Normal], Name = [filler-pod-fd7d04e5-c71c-4e29-8478-949043ac6d0b.166015823420fe55], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1885/filler-pod-fd7d04e5-c71c-4e29-8478-949043ac6d0b to leguer-worker] STEP: Considering event: Type = [Normal], Name = [filler-pod-fd7d04e5-c71c-4e29-8478-949043ac6d0b.1660158282b8751e], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-fd7d04e5-c71c-4e29-8478-949043ac6d0b.16601582df16a713], Reason = [Created], Message = [Created container filler-pod-fd7d04e5-c71c-4e29-8478-949043ac6d0b] STEP: Considering event: Type = [Normal], Name = [filler-pod-fd7d04e5-c71c-4e29-8478-949043ac6d0b.166015830a06f89b], Reason = [Started], Message = [Started container filler-pod-fd7d04e5-c71c-4e29-8478-949043ac6d0b] STEP: Considering event: Type = [Warning], Name = [additional-pod.166015839abb9d9a], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 2 Insufficient cpu.] STEP: removing the label node off the node leguer-worker2 STEP: verifying the node doesn't have the label node STEP: removing the label node off the node leguer-worker STEP: verifying the node doesn't have the label node [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 3 00:32:02.641: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-1885" for this suite. 
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:83 • [SLOW TEST:7.370 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates resource limits of pods that are allowed to run [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]","total":309,"completed":283,"skipped":4933,"failed":0} S ------------------------------ [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [k8s.io] [sig-node] Events /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 3 00:32:02.648: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: retrieving the pod Feb 3 00:32:07.124: INFO: &Pod{ObjectMeta:{send-events-a0aac547-e444-4144-8710-e37173912a30 events-5954 39a81c32-d2ae-458b-9470-723e6af46154 4196932 0 2021-02-03 00:32:03 +0000 UTC map[name:foo time:59579018] map[] [] [] [{e2e.test Update v1 2021-02-03 00:32:03 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{},"f:time":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"p\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":80,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:protocol":{}}},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-02-03 00:32:05 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.222\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-v22jm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-v22jm,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:p,Image:k8s.gcr.io/e2e-test-images/agnhost:2.21,Command:[],Args:[serve-hostname],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-v22jm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-03 00:32:03 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-03 00:32:05 +0000 
UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-03 00:32:05 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-03 00:32:03 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:10.244.2.222,StartTime:2021-02-03 00:32:03 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:p,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-02-03 00:32:05 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/agnhost:2.21,ImageID:k8s.gcr.io/e2e-test-images/agnhost@sha256:ab055cd3d45f50b90732c14593a5bf50f210871bb4f91994c756fc22db6d922a,ContainerID:containerd://4be9f55023d830e7d5f3c1ec3702278a419b01ab34c91c50811ef90cd9b4c6bf,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.222,},},EphemeralContainerStatuses:[]ContainerStatus{},},} STEP: checking for scheduler event about the pod Feb 3 00:32:09.128: INFO: Saw scheduler event for our pod. STEP: checking for kubelet event about the pod Feb 3 00:32:11.134: INFO: Saw kubelet event for our pod. STEP: deleting the pod [AfterEach] [k8s.io] [sig-node] Events /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 3 00:32:11.142: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-5954" for this suite. • [SLOW TEST:8.549 seconds] [k8s.io] [sig-node] Events /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]","total":309,"completed":284,"skipped":4934,"failed":0} SSSSS ------------------------------ [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 3 00:32:11.197: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:53 [It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating pod busybox-2241a250-5af8-4e96-be66-8fe2df04edbf in namespace container-probe-4970 Feb 3 00:32:15.352: INFO: Started pod busybox-2241a250-5af8-4e96-be66-8fe2df04edbf in namespace container-probe-4970 STEP: checking the pod's 
current state and verifying that restartCount is present Feb 3 00:32:15.356: INFO: Initial restart count of pod busybox-2241a250-5af8-4e96-be66-8fe2df04edbf is 0 Feb 3 00:33:05.480: INFO: Restart count of pod container-probe-4970/busybox-2241a250-5af8-4e96-be66-8fe2df04edbf is now 1 (50.123736195s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 3 00:33:05.524: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-4970" for this suite. • [SLOW TEST:54.341 seconds] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":309,"completed":285,"skipped":4939,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 3 00:33:05.539: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a pod to test downward api env vars Feb 3 00:33:05.644: INFO: Waiting up to 5m0s for pod "downward-api-341637a8-9928-463b-9c6a-35aa16404c83" in namespace "downward-api-5726" to be "Succeeded or Failed" Feb 3 00:33:05.647: INFO: Pod "downward-api-341637a8-9928-463b-9c6a-35aa16404c83": Phase="Pending", Reason="", readiness=false. Elapsed: 2.505966ms Feb 3 00:33:07.651: INFO: Pod "downward-api-341637a8-9928-463b-9c6a-35aa16404c83": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006500626s Feb 3 00:33:09.655: INFO: Pod "downward-api-341637a8-9928-463b-9c6a-35aa16404c83": Phase="Pending", Reason="", readiness=false. Elapsed: 4.011068752s Feb 3 00:33:11.661: INFO: Pod "downward-api-341637a8-9928-463b-9c6a-35aa16404c83": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.016254106s STEP: Saw pod success Feb 3 00:33:11.661: INFO: Pod "downward-api-341637a8-9928-463b-9c6a-35aa16404c83" satisfied condition "Succeeded or Failed" Feb 3 00:33:11.664: INFO: Trying to get logs from node leguer-worker pod downward-api-341637a8-9928-463b-9c6a-35aa16404c83 container dapi-container: STEP: delete the pod Feb 3 00:33:11.725: INFO: Waiting for pod downward-api-341637a8-9928-463b-9c6a-35aa16404c83 to disappear Feb 3 00:33:11.743: INFO: Pod downward-api-341637a8-9928-463b-9c6a-35aa16404c83 no longer exists [AfterEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 3 00:33:11.743: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5726" for this suite. • [SLOW TEST:6.250 seconds] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:34 should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":309,"completed":286,"skipped":4971,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 3 00:33:11.789: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating projection with secret that has name projected-secret-test-826209b3-81ca-4b99-86a1-c818df9bfd0c STEP: Creating a pod to test consume secrets Feb 3 00:33:11.914: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-4dcfa641-5030-47d2-a1b4-809dbde5e5ed" in namespace "projected-2048" to be "Succeeded or Failed" Feb 3 00:33:11.923: INFO: Pod "pod-projected-secrets-4dcfa641-5030-47d2-a1b4-809dbde5e5ed": Phase="Pending", Reason="", readiness=false. Elapsed: 9.619674ms Feb 3 00:33:13.928: INFO: Pod "pod-projected-secrets-4dcfa641-5030-47d2-a1b4-809dbde5e5ed": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013936338s Feb 3 00:33:15.931: INFO: Pod "pod-projected-secrets-4dcfa641-5030-47d2-a1b4-809dbde5e5ed": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.017841453s STEP: Saw pod success Feb 3 00:33:15.931: INFO: Pod "pod-projected-secrets-4dcfa641-5030-47d2-a1b4-809dbde5e5ed" satisfied condition "Succeeded or Failed" Feb 3 00:33:15.935: INFO: Trying to get logs from node leguer-worker2 pod pod-projected-secrets-4dcfa641-5030-47d2-a1b4-809dbde5e5ed container projected-secret-volume-test: STEP: delete the pod Feb 3 00:33:15.966: INFO: Waiting for pod pod-projected-secrets-4dcfa641-5030-47d2-a1b4-809dbde5e5ed to disappear Feb 3 00:33:15.970: INFO: Pod pod-projected-secrets-4dcfa641-5030-47d2-a1b4-809dbde5e5ed no longer exists [AfterEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 3 00:33:15.970: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2048" for this suite. •{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":309,"completed":287,"skipped":4983,"failed":0} SSS ------------------------------ [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 3 00:33:15.977: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe add, update, and delete watch notifications on configmaps [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: creating a watch on configmaps with label A STEP: creating a watch on configmaps with label B STEP: creating a watch on configmaps with label A or B STEP: creating a configmap with label A and ensuring the correct watchers observe the notification Feb 3 00:33:16.052: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-7749 e40d1c95-1b93-444a-8770-d1c7154d6cd3 4197174 0 2021-02-03 00:33:16 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2021-02-03 00:33:16 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Feb 3 00:33:16.052: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-7749 e40d1c95-1b93-444a-8770-d1c7154d6cd3 4197174 0 2021-02-03 00:33:16 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2021-02-03 00:33:16 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying configmap A and ensuring the correct watchers observe the notification Feb 3 00:33:26.065: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-7749 e40d1c95-1b93-444a-8770-d1c7154d6cd3 4197216 0 2021-02-03 00:33:16 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2021-02-03 00:33:26 +0000 UTC FieldsV1 
{"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} Feb 3 00:33:26.065: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-7749 e40d1c95-1b93-444a-8770-d1c7154d6cd3 4197216 0 2021-02-03 00:33:16 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2021-02-03 00:33:26 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying configmap A again and ensuring the correct watchers observe the notification Feb 3 00:33:36.077: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-7749 e40d1c95-1b93-444a-8770-d1c7154d6cd3 4197236 0 2021-02-03 00:33:16 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2021-02-03 00:33:26 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Feb 3 00:33:36.078: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-7749 e40d1c95-1b93-444a-8770-d1c7154d6cd3 4197236 0 2021-02-03 00:33:16 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2021-02-03 00:33:26 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: deleting configmap A and ensuring the correct watchers observe the notification Feb 3 00:33:46.087: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-7749 e40d1c95-1b93-444a-8770-d1c7154d6cd3 4197258 0 2021-02-03 00:33:16 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2021-02-03 00:33:26 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Feb 3 00:33:46.087: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-7749 e40d1c95-1b93-444a-8770-d1c7154d6cd3 4197258 0 2021-02-03 00:33:16 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2021-02-03 00:33:26 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: creating a configmap with label B and ensuring the correct watchers observe the notification Feb 3 00:33:56.098: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-7749 26304129-21e6-4ffb-aa20-9aff328f983a 4197278 0 2021-02-03 00:33:56 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2021-02-03 00:33:56 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Feb 3 00:33:56.098: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-7749 26304129-21e6-4ffb-aa20-9aff328f983a 4197278 0 2021-02-03 00:33:56 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] 
[{e2e.test Update v1 2021-02-03 00:33:56 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} STEP: deleting configmap B and ensuring the correct watchers observe the notification Feb 3 00:34:06.107: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-7749 26304129-21e6-4ffb-aa20-9aff328f983a 4197298 0 2021-02-03 00:33:56 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2021-02-03 00:33:56 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Feb 3 00:34:06.107: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-7749 26304129-21e6-4ffb-aa20-9aff328f983a 4197298 0 2021-02-03 00:33:56 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2021-02-03 00:33:56 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 3 00:34:16.107: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-7749" for this suite. • [SLOW TEST:60.142 seconds] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe add, update, and delete watch notifications on configmaps [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":309,"completed":288,"skipped":4986,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 3 00:34:16.120: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247 [It] should support proxy with --port 0 [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: starting the proxy server Feb 3 00:34:16.196: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-9384 proxy -p 0 --disable-filter' STEP: curling proxy /api/ output [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 3 00:34:16.310: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9384" for this suite. 
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance]","total":309,"completed":289,"skipped":5004,"failed":0} SSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 3 00:34:16.321: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:53 [It] should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating pod liveness-58d6c25b-8f78-4420-b1cd-48569546267b in namespace container-probe-1262 Feb 3 00:34:20.527: INFO: Started pod liveness-58d6c25b-8f78-4420-b1cd-48569546267b in namespace container-probe-1262 STEP: checking the pod's current state and verifying that restartCount is present Feb 3 00:34:20.530: INFO: Initial restart count of pod liveness-58d6c25b-8f78-4420-b1cd-48569546267b is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 3 00:38:21.413: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-1262" for this suite. 
• [SLOW TEST:245.123 seconds] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]","total":309,"completed":290,"skipped":5012,"failed":0} SSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 3 00:38:21.445: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group and version but different kinds [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: CRs in the same group and version but different kinds (two CRDs) show up in OpenAPI documentation Feb 3 00:38:21.789: INFO: >>> kubeConfig: /root/.kube/config Feb 3 00:38:25.343: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 3 00:38:39.131: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-3539" for this suite. 
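The CRD publishing test above relies on each CustomResourceDefinition carrying a structural schema; two CRDs that share a group and version but differ in kind then each show up as separate definitions in the served OpenAPI document. A minimal sketch of one such CRD, with a made-up group, kind, and field purely for illustration:

apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: foos.example.test           # must be <plural>.<group>; illustrative
spec:
  group: example.test
  scope: Namespaced
  names:
    plural: foos
    singular: foo
    kind: Foo
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:              # the structural schema the apiserver publishes
        type: object
        properties:
          spec:
            type: object
            properties:
              replicas:
                type: integer

A second CRD in the same group and version with a different kind (say Bar) would be defined the same way and published alongside it.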
• [SLOW TEST:17.694 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group and version but different kinds [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":309,"completed":291,"skipped":5022,"failed":0} SSSSSSSSSSSS ------------------------------ [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 3 00:38:39.139: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:187 [It] should support remote command execution over websockets [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 Feb 3 00:38:39.184: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 3 00:38:43.358: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-4141" for this suite. 
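The websocket test above submits a long-running pod and then exercises the pod's exec subresource over a websocket connection to the API server rather than through kubectl. A minimal sketch of a pod that could serve as such an exec target (name, image tag, and command are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: exec-target                 # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: busybox:1.29
    # keep the container alive so there is a running process to exec into
    command: ["sh", "-c", "sleep 600"]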
•{"msg":"PASSED [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":309,"completed":292,"skipped":5034,"failed":0} SSS ------------------------------ [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 3 00:38:43.367: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: creating a watch on configmaps with a certain label STEP: creating a new configmap STEP: modifying the configmap once STEP: changing the label value of the configmap STEP: Expecting to observe a delete notification for the watched object Feb 3 00:38:43.600: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-2610 1835cf47-4800-4757-99cc-9e602b130c20 4197917 0 2021-02-03 00:38:43 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2021-02-03 00:38:43 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Feb 3 00:38:43.601: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-2610 1835cf47-4800-4757-99cc-9e602b130c20 4197918 0 2021-02-03 00:38:43 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2021-02-03 00:38:43 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} Feb 3 00:38:43.601: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-2610 1835cf47-4800-4757-99cc-9e602b130c20 4197919 0 2021-02-03 00:38:43 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2021-02-03 00:38:43 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying the configmap a second time STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements STEP: changing the label value of the configmap back STEP: modifying the configmap a third time STEP: deleting the configmap STEP: Expecting to observe an add notification for the watched object when the label value was restored Feb 3 00:38:53.646: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-2610 1835cf47-4800-4757-99cc-9e602b130c20 4197956 0 2021-02-03 00:38:43 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2021-02-03 00:38:43 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 
2,},BinaryData:map[string][]byte{},Immutable:nil,} Feb 3 00:38:53.646: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-2610 1835cf47-4800-4757-99cc-9e602b130c20 4197957 0 2021-02-03 00:38:43 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2021-02-03 00:38:43 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,} Feb 3 00:38:53.646: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-2610 1835cf47-4800-4757-99cc-9e602b130c20 4197958 0 2021-02-03 00:38:43 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2021-02-03 00:38:43 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 3 00:38:53.646: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-2610" for this suite. • [SLOW TEST:10.285 seconds] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":309,"completed":293,"skipped":5037,"failed":0} SS ------------------------------ [sig-api-machinery] Secrets should patch a secret [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 3 00:38:53.653: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should patch a secret [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: creating a secret STEP: listing secrets in all namespaces to ensure that there are more than zero STEP: patching the secret STEP: deleting the secret using a LabelSelector STEP: listing secrets in all namespaces, searching for label name and value in patch [AfterEach] [sig-api-machinery] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 3 00:38:53.857: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-331" for this suite. 
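The Secrets test above creates a secret, patches it, and then lists secrets looking for the label name and value that the patch applied. A minimal sketch of a secret carrying such a label (name, label, and data are illustrative):

apiVersion: v1
kind: Secret
metadata:
  name: example-secret              # illustrative name
  labels:
    testsecret: "true"              # the kind of label the patch step adds and the selector matches
type: Opaque
stringData:
  key: value                        # stored base64-encoded under .data.key

Deleting by that label selector then removes it, which is what the "deleting the secret using a LabelSelector" step above does through the API.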
•{"msg":"PASSED [sig-api-machinery] Secrets should patch a secret [Conformance]","total":309,"completed":294,"skipped":5039,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should test the lifecycle of an Endpoint [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 3 00:38:53.865: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:745 [It] should test the lifecycle of an Endpoint [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: creating an Endpoint STEP: waiting for available Endpoint STEP: listing all Endpoints STEP: updating the Endpoint STEP: fetching the Endpoint STEP: patching the Endpoint STEP: fetching the Endpoint STEP: deleting the Endpoint by Collection STEP: waiting for Endpoint deletion STEP: fetching the Endpoint [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 3 00:38:54.019: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-9862" for this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 •{"msg":"PASSED [sig-network] Services should test the lifecycle of an Endpoint [Conformance]","total":309,"completed":295,"skipped":5058,"failed":0} SSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [k8s.io] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 3 00:38:54.028: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:52 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute poststart http hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Feb 3 00:39:02.217: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Feb 3 00:39:02.269: INFO: Pod pod-with-poststart-http-hook still exists Feb 3 00:39:04.269: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Feb 3 00:39:04.318: INFO: Pod pod-with-poststart-http-hook still exists Feb 3 00:39:06.269: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Feb 3 00:39:06.275: INFO: Pod pod-with-poststart-http-hook still exists Feb 3 00:39:08.269: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Feb 3 00:39:08.274: INFO: Pod pod-with-poststart-http-hook still exists Feb 3 00:39:10.269: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Feb 3 00:39:10.272: INFO: Pod pod-with-poststart-http-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 3 00:39:10.272: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-743" for this suite. • [SLOW TEST:16.251 seconds] [k8s.io] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 when create a pod with lifecycle hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:43 should execute poststart http hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":309,"completed":296,"skipped":5072,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should succeed in writing subpaths in container [sig-storage][Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [k8s.io] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 3 00:39:10.279: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should succeed in writing subpaths in container [sig-storage][Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: creating the pod STEP: waiting for pod running STEP: creating a file in subpath Feb 3 00:39:14.432: INFO: ExecWithOptions {Command:[/bin/sh -c touch /volume_mount/mypath/foo/test.log] Namespace:var-expansion-3805 PodName:var-expansion-fdcba706-8afe-4195-8552-3f9dd1795cfc ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Feb 3 00:39:14.432: INFO: >>> kubeConfig: /root/.kube/config I0203 00:39:14.476951 7 log.go:181] (0xc0067da8f0) (0xc001999b80) Create stream 
I0203 00:39:14.476985 7 log.go:181] (0xc0067da8f0) (0xc001999b80) Stream added, broadcasting: 1 I0203 00:39:14.478963 7 log.go:181] (0xc0067da8f0) Reply frame received for 1 I0203 00:39:14.478999 7 log.go:181] (0xc0067da8f0) (0xc003e81860) Create stream I0203 00:39:14.479016 7 log.go:181] (0xc0067da8f0) (0xc003e81860) Stream added, broadcasting: 3 I0203 00:39:14.479811 7 log.go:181] (0xc0067da8f0) Reply frame received for 3 I0203 00:39:14.479845 7 log.go:181] (0xc0067da8f0) (0xc00404c5a0) Create stream I0203 00:39:14.479858 7 log.go:181] (0xc0067da8f0) (0xc00404c5a0) Stream added, broadcasting: 5 I0203 00:39:14.480684 7 log.go:181] (0xc0067da8f0) Reply frame received for 5 I0203 00:39:14.571448 7 log.go:181] (0xc0067da8f0) Data frame received for 5 I0203 00:39:14.571492 7 log.go:181] (0xc00404c5a0) (5) Data frame handling I0203 00:39:14.571529 7 log.go:181] (0xc0067da8f0) Data frame received for 3 I0203 00:39:14.571562 7 log.go:181] (0xc003e81860) (3) Data frame handling I0203 00:39:14.572770 7 log.go:181] (0xc0067da8f0) Data frame received for 1 I0203 00:39:14.572794 7 log.go:181] (0xc001999b80) (1) Data frame handling I0203 00:39:14.572809 7 log.go:181] (0xc001999b80) (1) Data frame sent I0203 00:39:14.572828 7 log.go:181] (0xc0067da8f0) (0xc001999b80) Stream removed, broadcasting: 1 I0203 00:39:14.572938 7 log.go:181] (0xc0067da8f0) Go away received I0203 00:39:14.573003 7 log.go:181] (0xc0067da8f0) (0xc001999b80) Stream removed, broadcasting: 1 I0203 00:39:14.573026 7 log.go:181] (0xc0067da8f0) (0xc003e81860) Stream removed, broadcasting: 3 I0203 00:39:14.573041 7 log.go:181] (0xc0067da8f0) (0xc00404c5a0) Stream removed, broadcasting: 5 STEP: test for file in mounted path Feb 3 00:39:14.576: INFO: ExecWithOptions {Command:[/bin/sh -c test -f /subpath_mount/test.log] Namespace:var-expansion-3805 PodName:var-expansion-fdcba706-8afe-4195-8552-3f9dd1795cfc ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Feb 3 00:39:14.576: INFO: >>> kubeConfig: /root/.kube/config I0203 00:39:14.600024 7 log.go:181] (0xc008498580) (0xc00404caa0) Create stream I0203 00:39:14.600051 7 log.go:181] (0xc008498580) (0xc00404caa0) Stream added, broadcasting: 1 I0203 00:39:14.610152 7 log.go:181] (0xc008498580) Reply frame received for 1 I0203 00:39:14.610196 7 log.go:181] (0xc008498580) (0xc002efc000) Create stream I0203 00:39:14.610208 7 log.go:181] (0xc008498580) (0xc002efc000) Stream added, broadcasting: 3 I0203 00:39:14.611270 7 log.go:181] (0xc008498580) Reply frame received for 3 I0203 00:39:14.611322 7 log.go:181] (0xc008498580) (0xc003e80280) Create stream I0203 00:39:14.611337 7 log.go:181] (0xc008498580) (0xc003e80280) Stream added, broadcasting: 5 I0203 00:39:14.612204 7 log.go:181] (0xc008498580) Reply frame received for 5 I0203 00:39:14.682230 7 log.go:181] (0xc008498580) Data frame received for 5 I0203 00:39:14.682266 7 log.go:181] (0xc008498580) Data frame received for 3 I0203 00:39:14.682296 7 log.go:181] (0xc002efc000) (3) Data frame handling I0203 00:39:14.682322 7 log.go:181] (0xc003e80280) (5) Data frame handling I0203 00:39:14.683653 7 log.go:181] (0xc008498580) Data frame received for 1 I0203 00:39:14.683675 7 log.go:181] (0xc00404caa0) (1) Data frame handling I0203 00:39:14.683695 7 log.go:181] (0xc00404caa0) (1) Data frame sent I0203 00:39:14.683721 7 log.go:181] (0xc008498580) (0xc00404caa0) Stream removed, broadcasting: 1 I0203 00:39:14.683805 7 log.go:181] (0xc008498580) (0xc00404caa0) Stream removed, broadcasting: 1 I0203 
00:39:14.683818 7 log.go:181] (0xc008498580) (0xc002efc000) Stream removed, broadcasting: 3 I0203 00:39:14.683953 7 log.go:181] (0xc008498580) Go away received I0203 00:39:14.683984 7 log.go:181] (0xc008498580) (0xc003e80280) Stream removed, broadcasting: 5 STEP: updating the annotation value Feb 3 00:39:15.198: INFO: Successfully updated pod "var-expansion-fdcba706-8afe-4195-8552-3f9dd1795cfc" STEP: waiting for annotated pod running STEP: deleting the pod gracefully Feb 3 00:39:15.216: INFO: Deleting pod "var-expansion-fdcba706-8afe-4195-8552-3f9dd1795cfc" in namespace "var-expansion-3805" Feb 3 00:39:15.220: INFO: Wait up to 5m0s for pod "var-expansion-fdcba706-8afe-4195-8552-3f9dd1795cfc" to be fully deleted [AfterEach] [k8s.io] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 3 00:40:21.245: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-3805" for this suite. • [SLOW TEST:70.976 seconds] [k8s.io] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 should succeed in writing subpaths in container [sig-storage][Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should succeed in writing subpaths in container [sig-storage][Slow] [Conformance]","total":309,"completed":297,"skipped":5087,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 3 00:40:21.256: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247 [BeforeEach] Update Demo /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:299 [It] should scale a replication controller [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: creating a replication controller Feb 3 00:40:21.389: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-8640 create -f -' Feb 3 00:40:25.074: INFO: stderr: "" Feb 3 00:40:25.074: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. 
Feb 3 00:40:25.074: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-8640 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Feb 3 00:40:25.221: INFO: stderr: "" Feb 3 00:40:25.221: INFO: stdout: "update-demo-nautilus-h82np update-demo-nautilus-kp8kz " Feb 3 00:40:25.221: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-8640 get pods update-demo-nautilus-h82np -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Feb 3 00:40:25.316: INFO: stderr: "" Feb 3 00:40:25.316: INFO: stdout: "" Feb 3 00:40:25.316: INFO: update-demo-nautilus-h82np is created but not running Feb 3 00:40:30.316: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-8640 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Feb 3 00:40:30.427: INFO: stderr: "" Feb 3 00:40:30.427: INFO: stdout: "update-demo-nautilus-h82np update-demo-nautilus-kp8kz " Feb 3 00:40:30.427: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-8640 get pods update-demo-nautilus-h82np -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Feb 3 00:40:30.527: INFO: stderr: "" Feb 3 00:40:30.528: INFO: stdout: "true" Feb 3 00:40:30.528: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-8640 get pods update-demo-nautilus-h82np -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Feb 3 00:40:30.617: INFO: stderr: "" Feb 3 00:40:30.617: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Feb 3 00:40:30.617: INFO: validating pod update-demo-nautilus-h82np Feb 3 00:40:30.620: INFO: got data: { "image": "nautilus.jpg" } Feb 3 00:40:30.620: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Feb 3 00:40:30.620: INFO: update-demo-nautilus-h82np is verified up and running Feb 3 00:40:30.620: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-8640 get pods update-demo-nautilus-kp8kz -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Feb 3 00:40:30.714: INFO: stderr: "" Feb 3 00:40:30.714: INFO: stdout: "true" Feb 3 00:40:30.714: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-8640 get pods update-demo-nautilus-kp8kz -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Feb 3 00:40:30.806: INFO: stderr: "" Feb 3 00:40:30.806: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Feb 3 00:40:30.806: INFO: validating pod update-demo-nautilus-kp8kz Feb 3 00:40:30.810: INFO: got data: { "image": "nautilus.jpg" } Feb 3 00:40:30.810: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Feb 3 00:40:30.810: INFO: update-demo-nautilus-kp8kz is verified up and running STEP: scaling down the replication controller Feb 3 00:40:30.813: INFO: scanned /root for discovery docs: Feb 3 00:40:30.813: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-8640 scale rc update-demo-nautilus --replicas=1 --timeout=5m' Feb 3 00:40:31.956: INFO: stderr: "" Feb 3 00:40:31.956: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. Feb 3 00:40:31.956: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-8640 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Feb 3 00:40:32.055: INFO: stderr: "" Feb 3 00:40:32.055: INFO: stdout: "update-demo-nautilus-h82np update-demo-nautilus-kp8kz " STEP: Replicas for name=update-demo: expected=1 actual=2 Feb 3 00:40:37.055: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-8640 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Feb 3 00:40:37.166: INFO: stderr: "" Feb 3 00:40:37.167: INFO: stdout: "update-demo-nautilus-h82np update-demo-nautilus-kp8kz " STEP: Replicas for name=update-demo: expected=1 actual=2 Feb 3 00:40:42.167: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-8640 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Feb 3 00:40:42.280: INFO: stderr: "" Feb 3 00:40:42.280: INFO: stdout: "update-demo-nautilus-kp8kz " Feb 3 00:40:42.280: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-8640 get pods update-demo-nautilus-kp8kz -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Feb 3 00:40:42.376: INFO: stderr: "" Feb 3 00:40:42.376: INFO: stdout: "true" Feb 3 00:40:42.376: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-8640 get pods update-demo-nautilus-kp8kz -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Feb 3 00:40:42.473: INFO: stderr: "" Feb 3 00:40:42.473: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Feb 3 00:40:42.473: INFO: validating pod update-demo-nautilus-kp8kz Feb 3 00:40:42.477: INFO: got data: { "image": "nautilus.jpg" } Feb 3 00:40:42.477: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
Feb 3 00:40:42.477: INFO: update-demo-nautilus-kp8kz is verified up and running STEP: scaling up the replication controller Feb 3 00:40:42.480: INFO: scanned /root for discovery docs: Feb 3 00:40:42.480: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-8640 scale rc update-demo-nautilus --replicas=2 --timeout=5m' Feb 3 00:40:43.611: INFO: stderr: "" Feb 3 00:40:43.611: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. Feb 3 00:40:43.611: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-8640 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Feb 3 00:40:43.708: INFO: stderr: "" Feb 3 00:40:43.709: INFO: stdout: "update-demo-nautilus-5c88l update-demo-nautilus-kp8kz " Feb 3 00:40:43.709: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-8640 get pods update-demo-nautilus-5c88l -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Feb 3 00:40:43.804: INFO: stderr: "" Feb 3 00:40:43.804: INFO: stdout: "" Feb 3 00:40:43.804: INFO: update-demo-nautilus-5c88l is created but not running Feb 3 00:40:48.804: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-8640 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Feb 3 00:40:48.913: INFO: stderr: "" Feb 3 00:40:48.913: INFO: stdout: "update-demo-nautilus-5c88l update-demo-nautilus-kp8kz " Feb 3 00:40:48.913: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-8640 get pods update-demo-nautilus-5c88l -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Feb 3 00:40:49.018: INFO: stderr: "" Feb 3 00:40:49.018: INFO: stdout: "true" Feb 3 00:40:49.018: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-8640 get pods update-demo-nautilus-5c88l -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Feb 3 00:40:49.124: INFO: stderr: "" Feb 3 00:40:49.124: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Feb 3 00:40:49.124: INFO: validating pod update-demo-nautilus-5c88l Feb 3 00:40:49.129: INFO: got data: { "image": "nautilus.jpg" } Feb 3 00:40:49.129: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Feb 3 00:40:49.129: INFO: update-demo-nautilus-5c88l is verified up and running Feb 3 00:40:49.129: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-8640 get pods update-demo-nautilus-kp8kz -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}}' Feb 3 00:40:49.232: INFO: stderr: "" Feb 3 00:40:49.232: INFO: stdout: "true" Feb 3 00:40:49.233: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-8640 get pods update-demo-nautilus-kp8kz -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Feb 3 00:40:49.334: INFO: stderr: "" Feb 3 00:40:49.334: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Feb 3 00:40:49.334: INFO: validating pod update-demo-nautilus-kp8kz Feb 3 00:40:49.337: INFO: got data: { "image": "nautilus.jpg" } Feb 3 00:40:49.337: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Feb 3 00:40:49.337: INFO: update-demo-nautilus-kp8kz is verified up and running STEP: using delete to clean up resources Feb 3 00:40:49.337: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-8640 delete --grace-period=0 --force -f -' Feb 3 00:40:49.451: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Feb 3 00:40:49.451: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Feb 3 00:40:49.451: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-8640 get rc,svc -l name=update-demo --no-headers' Feb 3 00:40:49.545: INFO: stderr: "No resources found in kubectl-8640 namespace.\n" Feb 3 00:40:49.545: INFO: stdout: "" Feb 3 00:40:49.545: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-8640 get pods -l name=update-demo -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Feb 3 00:40:49.648: INFO: stderr: "" Feb 3 00:40:49.648: INFO: stdout: "update-demo-nautilus-5c88l\nupdate-demo-nautilus-kp8kz\n" Feb 3 00:40:50.149: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-8640 get rc,svc -l name=update-demo --no-headers' Feb 3 00:40:50.351: INFO: stderr: "No resources found in kubectl-8640 namespace.\n" Feb 3 00:40:50.351: INFO: stdout: "" Feb 3 00:40:50.352: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-8640 get pods -l name=update-demo -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Feb 3 00:40:50.505: INFO: stderr: "" Feb 3 00:40:50.505: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 3 00:40:50.505: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8640" for this suite. 
• [SLOW TEST:29.256 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:297 should scale a replication controller [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","total":309,"completed":298,"skipped":5101,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 3 00:40:50.513: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 Feb 3 00:40:50.646: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-739e25a4-2fee-4a74-90b2-07a0295403ca" in namespace "security-context-test-3909" to be "Succeeded or Failed" Feb 3 00:40:50.930: INFO: Pod "busybox-readonly-false-739e25a4-2fee-4a74-90b2-07a0295403ca": Phase="Pending", Reason="", readiness=false. Elapsed: 284.195813ms Feb 3 00:40:52.935: INFO: Pod "busybox-readonly-false-739e25a4-2fee-4a74-90b2-07a0295403ca": Phase="Pending", Reason="", readiness=false. Elapsed: 2.289162131s Feb 3 00:40:54.940: INFO: Pod "busybox-readonly-false-739e25a4-2fee-4a74-90b2-07a0295403ca": Phase="Running", Reason="", readiness=true. Elapsed: 4.294046417s Feb 3 00:40:56.944: INFO: Pod "busybox-readonly-false-739e25a4-2fee-4a74-90b2-07a0295403ca": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.298943854s Feb 3 00:40:56.945: INFO: Pod "busybox-readonly-false-739e25a4-2fee-4a74-90b2-07a0295403ca" satisfied condition "Succeeded or Failed" [AfterEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 3 00:40:56.945: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-3909" for this suite. 
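(Editorial note: for reference, a minimal sketch of the kind of pod this Security Context test creates — a busybox container with readOnlyRootFilesystem explicitly set to false so a write to the root filesystem must succeed. The image, command, and object names are assumptions, not the exact manifest the framework generates.)

```go
package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	readOnly := false // readOnlyRootFilesystem=false: the rootfs must stay writable
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "busybox-readonly-false-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "busybox",
				Image:   "busybox",
				Command: []string{"/bin/sh", "-c", "touch /file && echo ok"},
				SecurityContext: &corev1.SecurityContext{
					ReadOnlyRootFilesystem: &readOnly,
				},
			}},
		},
	}

	// Create the pod; the e2e test then waits for it to reach "Succeeded",
	// which only happens if the write to the root filesystem succeeds.
	if _, err := cs.CoreV1().Pods("default").Create(context.Background(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
```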
• [SLOW TEST:6.442 seconds] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 When creating a pod with readOnlyRootFilesystem /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:166 should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":309,"completed":299,"skipped":5143,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 3 00:40:56.955: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:129 [It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 Feb 3 00:40:57.049: INFO: Creating simple daemon set daemon-set STEP: Check that daemon pods launch on every node of the cluster. 
Feb 3 00:40:57.058: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 3 00:40:57.082: INFO: Number of nodes with available pods: 0 Feb 3 00:40:57.082: INFO: Node leguer-worker is running more than one daemon pod Feb 3 00:40:58.087: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 3 00:40:58.091: INFO: Number of nodes with available pods: 0 Feb 3 00:40:58.091: INFO: Node leguer-worker is running more than one daemon pod Feb 3 00:40:59.182: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 3 00:40:59.186: INFO: Number of nodes with available pods: 0 Feb 3 00:40:59.186: INFO: Node leguer-worker is running more than one daemon pod Feb 3 00:41:00.182: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 3 00:41:00.186: INFO: Number of nodes with available pods: 0 Feb 3 00:41:00.186: INFO: Node leguer-worker is running more than one daemon pod Feb 3 00:41:01.088: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 3 00:41:01.103: INFO: Number of nodes with available pods: 1 Feb 3 00:41:01.103: INFO: Node leguer-worker is running more than one daemon pod Feb 3 00:41:02.087: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 3 00:41:02.099: INFO: Number of nodes with available pods: 2 Feb 3 00:41:02.099: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Update daemon pods image. STEP: Check that daemon pods images are updated. Feb 3 00:41:02.123: INFO: Wrong image for pod: daemon-set-8b47z. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Feb 3 00:41:02.123: INFO: Wrong image for pod: daemon-set-wxvfx. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Feb 3 00:41:02.146: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 3 00:41:03.151: INFO: Wrong image for pod: daemon-set-8b47z. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Feb 3 00:41:03.151: INFO: Wrong image for pod: daemon-set-wxvfx. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Feb 3 00:41:03.154: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 3 00:41:04.151: INFO: Wrong image for pod: daemon-set-8b47z. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Feb 3 00:41:04.151: INFO: Wrong image for pod: daemon-set-wxvfx. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. 
Feb 3 00:41:04.155: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 3 00:41:05.152: INFO: Wrong image for pod: daemon-set-8b47z. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Feb 3 00:41:05.152: INFO: Wrong image for pod: daemon-set-wxvfx. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Feb 3 00:41:05.152: INFO: Pod daemon-set-wxvfx is not available Feb 3 00:41:05.157: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 3 00:41:06.152: INFO: Wrong image for pod: daemon-set-8b47z. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Feb 3 00:41:06.152: INFO: Wrong image for pod: daemon-set-wxvfx. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Feb 3 00:41:06.152: INFO: Pod daemon-set-wxvfx is not available Feb 3 00:41:06.159: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 3 00:41:07.151: INFO: Wrong image for pod: daemon-set-8b47z. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Feb 3 00:41:07.151: INFO: Wrong image for pod: daemon-set-wxvfx. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Feb 3 00:41:07.151: INFO: Pod daemon-set-wxvfx is not available Feb 3 00:41:07.156: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 3 00:41:08.152: INFO: Wrong image for pod: daemon-set-8b47z. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Feb 3 00:41:08.152: INFO: Wrong image for pod: daemon-set-wxvfx. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Feb 3 00:41:08.152: INFO: Pod daemon-set-wxvfx is not available Feb 3 00:41:08.156: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 3 00:41:09.175: INFO: Wrong image for pod: daemon-set-8b47z. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Feb 3 00:41:09.175: INFO: Wrong image for pod: daemon-set-wxvfx. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Feb 3 00:41:09.175: INFO: Pod daemon-set-wxvfx is not available Feb 3 00:41:09.179: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 3 00:41:10.151: INFO: Wrong image for pod: daemon-set-8b47z. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Feb 3 00:41:10.151: INFO: Wrong image for pod: daemon-set-wxvfx. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. 
Feb 3 00:41:10.151: INFO: Pod daemon-set-wxvfx is not available Feb 3 00:41:10.155: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 3 00:41:11.152: INFO: Wrong image for pod: daemon-set-8b47z. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Feb 3 00:41:11.152: INFO: Wrong image for pod: daemon-set-wxvfx. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Feb 3 00:41:11.152: INFO: Pod daemon-set-wxvfx is not available Feb 3 00:41:11.156: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 3 00:41:12.152: INFO: Wrong image for pod: daemon-set-8b47z. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Feb 3 00:41:12.152: INFO: Wrong image for pod: daemon-set-wxvfx. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Feb 3 00:41:12.152: INFO: Pod daemon-set-wxvfx is not available Feb 3 00:41:12.157: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 3 00:41:13.151: INFO: Wrong image for pod: daemon-set-8b47z. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Feb 3 00:41:13.152: INFO: Wrong image for pod: daemon-set-wxvfx. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Feb 3 00:41:13.152: INFO: Pod daemon-set-wxvfx is not available Feb 3 00:41:13.156: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 3 00:41:14.152: INFO: Wrong image for pod: daemon-set-8b47z. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Feb 3 00:41:14.152: INFO: Wrong image for pod: daemon-set-wxvfx. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Feb 3 00:41:14.152: INFO: Pod daemon-set-wxvfx is not available Feb 3 00:41:14.157: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 3 00:41:15.163: INFO: Wrong image for pod: daemon-set-8b47z. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Feb 3 00:41:15.163: INFO: Wrong image for pod: daemon-set-wxvfx. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Feb 3 00:41:15.163: INFO: Pod daemon-set-wxvfx is not available Feb 3 00:41:15.166: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 3 00:41:16.152: INFO: Wrong image for pod: daemon-set-8b47z. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Feb 3 00:41:16.152: INFO: Wrong image for pod: daemon-set-wxvfx. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. 
Feb 3 00:41:16.152: INFO: Pod daemon-set-wxvfx is not available Feb 3 00:41:16.155: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 3 00:41:17.151: INFO: Wrong image for pod: daemon-set-8b47z. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Feb 3 00:41:17.151: INFO: Wrong image for pod: daemon-set-wxvfx. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Feb 3 00:41:17.151: INFO: Pod daemon-set-wxvfx is not available Feb 3 00:41:17.155: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 3 00:41:18.151: INFO: Wrong image for pod: daemon-set-8b47z. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Feb 3 00:41:18.151: INFO: Wrong image for pod: daemon-set-wxvfx. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Feb 3 00:41:18.151: INFO: Pod daemon-set-wxvfx is not available Feb 3 00:41:18.157: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 3 00:41:19.151: INFO: Wrong image for pod: daemon-set-8b47z. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Feb 3 00:41:19.151: INFO: Wrong image for pod: daemon-set-wxvfx. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Feb 3 00:41:19.151: INFO: Pod daemon-set-wxvfx is not available Feb 3 00:41:19.155: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 3 00:41:20.151: INFO: Wrong image for pod: daemon-set-8b47z. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Feb 3 00:41:20.151: INFO: Pod daemon-set-wmq4q is not available Feb 3 00:41:20.156: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 3 00:41:21.152: INFO: Wrong image for pod: daemon-set-8b47z. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Feb 3 00:41:21.152: INFO: Pod daemon-set-wmq4q is not available Feb 3 00:41:21.158: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 3 00:41:22.152: INFO: Wrong image for pod: daemon-set-8b47z. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Feb 3 00:41:22.152: INFO: Pod daemon-set-wmq4q is not available Feb 3 00:41:22.157: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 3 00:41:23.170: INFO: Wrong image for pod: daemon-set-8b47z. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. 
Feb 3 00:41:23.174: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 3 00:41:24.151: INFO: Wrong image for pod: daemon-set-8b47z. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Feb 3 00:41:24.156: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 3 00:41:25.151: INFO: Wrong image for pod: daemon-set-8b47z. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Feb 3 00:41:25.151: INFO: Pod daemon-set-8b47z is not available Feb 3 00:41:25.155: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 3 00:41:26.152: INFO: Wrong image for pod: daemon-set-8b47z. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Feb 3 00:41:26.152: INFO: Pod daemon-set-8b47z is not available Feb 3 00:41:26.157: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 3 00:41:27.155: INFO: Wrong image for pod: daemon-set-8b47z. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Feb 3 00:41:27.155: INFO: Pod daemon-set-8b47z is not available Feb 3 00:41:27.159: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 3 00:41:28.151: INFO: Wrong image for pod: daemon-set-8b47z. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Feb 3 00:41:28.151: INFO: Pod daemon-set-8b47z is not available Feb 3 00:41:28.155: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 3 00:41:29.151: INFO: Wrong image for pod: daemon-set-8b47z. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Feb 3 00:41:29.151: INFO: Pod daemon-set-8b47z is not available Feb 3 00:41:29.170: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 3 00:41:30.151: INFO: Wrong image for pod: daemon-set-8b47z. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Feb 3 00:41:30.151: INFO: Pod daemon-set-8b47z is not available Feb 3 00:41:30.155: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 3 00:41:31.151: INFO: Wrong image for pod: daemon-set-8b47z. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Feb 3 00:41:31.151: INFO: Pod daemon-set-8b47z is not available Feb 3 00:41:31.156: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 3 00:41:32.151: INFO: Wrong image for pod: daemon-set-8b47z. 
Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Feb 3 00:41:32.151: INFO: Pod daemon-set-8b47z is not available Feb 3 00:41:32.155: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 3 00:41:33.151: INFO: Wrong image for pod: daemon-set-8b47z. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Feb 3 00:41:33.151: INFO: Pod daemon-set-8b47z is not available Feb 3 00:41:33.156: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 3 00:41:34.150: INFO: Wrong image for pod: daemon-set-8b47z. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Feb 3 00:41:34.150: INFO: Pod daemon-set-8b47z is not available Feb 3 00:41:34.154: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 3 00:41:35.152: INFO: Wrong image for pod: daemon-set-8b47z. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Feb 3 00:41:35.152: INFO: Pod daemon-set-8b47z is not available Feb 3 00:41:35.157: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 3 00:41:36.152: INFO: Wrong image for pod: daemon-set-8b47z. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Feb 3 00:41:36.152: INFO: Pod daemon-set-8b47z is not available Feb 3 00:41:36.157: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 3 00:41:37.152: INFO: Wrong image for pod: daemon-set-8b47z. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Feb 3 00:41:37.152: INFO: Pod daemon-set-8b47z is not available Feb 3 00:41:37.156: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 3 00:41:38.151: INFO: Wrong image for pod: daemon-set-8b47z. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Feb 3 00:41:38.151: INFO: Pod daemon-set-8b47z is not available Feb 3 00:41:38.156: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 3 00:41:39.151: INFO: Wrong image for pod: daemon-set-8b47z. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. 
Feb 3 00:41:39.151: INFO: Pod daemon-set-8b47z is not available Feb 3 00:41:39.156: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 3 00:41:40.165: INFO: Pod daemon-set-jq6rl is not available Feb 3 00:41:40.211: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node STEP: Check that daemon pods are still running on every node of the cluster. Feb 3 00:41:40.226: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 3 00:41:40.230: INFO: Number of nodes with available pods: 1 Feb 3 00:41:40.230: INFO: Node leguer-worker is running more than one daemon pod Feb 3 00:41:41.235: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 3 00:41:41.246: INFO: Number of nodes with available pods: 1 Feb 3 00:41:41.246: INFO: Node leguer-worker is running more than one daemon pod Feb 3 00:41:42.234: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 3 00:41:42.238: INFO: Number of nodes with available pods: 1 Feb 3 00:41:42.238: INFO: Node leguer-worker is running more than one daemon pod Feb 3 00:41:43.236: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 3 00:41:43.240: INFO: Number of nodes with available pods: 1 Feb 3 00:41:43.240: INFO: Node leguer-worker is running more than one daemon pod Feb 3 00:41:44.235: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 3 00:41:44.239: INFO: Number of nodes with available pods: 2 Feb 3 00:41:44.239: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:95 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-5642, will wait for the garbage collector to delete the pods Feb 3 00:41:44.313: INFO: Deleting DaemonSet.extensions daemon-set took: 5.349872ms Feb 3 00:41:44.913: INFO: Terminating DaemonSet.extensions daemon-set pods took: 600.234016ms Feb 3 00:42:19.917: INFO: Number of nodes with available pods: 0 Feb 3 00:42:19.917: INFO: Number of running nodes: 0, number of available pods: 0 Feb 3 00:42:19.920: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"4198665"},"items":null} Feb 3 00:42:19.923: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"4198665"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 3 00:42:19.933: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-5642" for this suite. 
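(Editorial note: the long block above is the RollingUpdate in action — each polling pass lists the daemon pods, reports any whose image still differs from the target, and skips the control-plane node because of its NoSchedule taint. As an illustrative sketch, not the framework's own helper, the same update-and-wait can be done with client-go by changing the pod template image and polling the DaemonSet status until all scheduled pods are updated and available; the namespace, name, and image below are taken from the log purely as examples.)

```go
package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	ctx := context.Background()
	ns, name := "daemonsets-5642", "daemon-set" // illustrative values from the log

	// Update the pod template image; with updateStrategy RollingUpdate the
	// controller replaces daemon pods node by node.
	ds, err := cs.AppsV1().DaemonSets(ns).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	ds.Spec.Template.Spec.Containers[0].Image = "k8s.gcr.io/e2e-test-images/agnhost:2.21"
	if _, err := cs.AppsV1().DaemonSets(ns).Update(ctx, ds, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}

	// Poll the status until every scheduled daemon pod runs the new template
	// and is available again, which is what the repeated checks above verify.
	err = wait.PollImmediate(time.Second, 5*time.Minute, func() (bool, error) {
		cur, err := cs.AppsV1().DaemonSets(ns).Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		fmt.Printf("desired=%d updated=%d available=%d\n",
			cur.Status.DesiredNumberScheduled,
			cur.Status.UpdatedNumberScheduled,
			cur.Status.NumberAvailable)
		return cur.Status.UpdatedNumberScheduled == cur.Status.DesiredNumberScheduled &&
			cur.Status.NumberAvailable == cur.Status.DesiredNumberScheduled, nil
	})
	if err != nil {
		panic(err)
	}
}
```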
• [SLOW TEST:82.990 seconds] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","total":309,"completed":300,"skipped":5157,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 3 00:42:19.945: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready Feb 3 00:42:20.757: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set Feb 3 00:42:22.767: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63747909740, loc:(*time.Location)(0x7962e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63747909740, loc:(*time.Location)(0x7962e20)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63747909740, loc:(*time.Location)(0x7962e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63747909740, loc:(*time.Location)(0x7962e20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-7d6697c5b7\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Feb 3 00:42:25.808: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert a non homogeneous list of CRs [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 Feb 3 00:42:25.816: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: Create a v2 custom resource STEP: List CRs in v1 STEP: List CRs in v2 [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 3 00:42:27.020: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-2978" for this suite. [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137 • [SLOW TEST:7.247 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert a non homogeneous list of CRs [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":309,"completed":301,"skipped":5177,"failed":0} [sig-node] RuntimeClass should support RuntimeClasses API operations [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-node] RuntimeClass /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 3 00:42:27.192: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename runtimeclass STEP: Waiting for a default service account to be provisioned in namespace [It] should support RuntimeClasses API operations [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: getting /apis STEP: getting /apis/node.k8s.io STEP: getting /apis/node.k8s.io/v1 STEP: creating STEP: watching Feb 3 00:42:27.313: INFO: starting watch STEP: getting STEP: listing STEP: patching STEP: updating Feb 3 00:42:27.335: INFO: waiting for watch events with expected annotations STEP: deleting STEP: deleting a collection [AfterEach] [sig-node] RuntimeClass /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 3 00:42:27.454: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "runtimeclass-562" for this suite. 
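(Editorial note: the RuntimeClass test above exercises the plain CRUD surface of the node.k8s.io/v1 API. A rough client-go equivalent is sketched below; the handler and object name are made up for illustration and are not what the test itself creates.)

```go
package main

import (
	"context"
	"fmt"

	nodev1 "k8s.io/api/node/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	ctx := context.Background()
	rcs := cs.NodeV1().RuntimeClasses() // RuntimeClasses are cluster-scoped

	// creating
	rc := &nodev1.RuntimeClass{
		ObjectMeta: metav1.ObjectMeta{Name: "example-runtimeclass"},
		Handler:    "runc", // the handler is the only required field besides the name
	}
	if _, err := rcs.Create(ctx, rc, metav1.CreateOptions{}); err != nil {
		panic(err)
	}

	// getting and listing
	got, err := rcs.Get(ctx, "example-runtimeclass", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("handler:", got.Handler)
	if _, err := rcs.List(ctx, metav1.ListOptions{}); err != nil {
		panic(err)
	}

	// deleting
	if err := rcs.Delete(ctx, "example-runtimeclass", metav1.DeleteOptions{}); err != nil {
		panic(err)
	}
}
```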
•{"msg":"PASSED [sig-node] RuntimeClass should support RuntimeClasses API operations [Conformance]","total":309,"completed":302,"skipped":5177,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 3 00:42:27.466: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should receive events on concurrent watches in same order [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: starting a background goroutine to produce watch events STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order [AfterEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 3 00:42:33.030: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-1638" for this suite. • [SLOW TEST:5.669 seconds] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should receive events on concurrent watches in same order [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":309,"completed":303,"skipped":5223,"failed":0} S ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 3 00:42:33.135: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Feb 3 00:42:33.925: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Feb 3 00:42:36.021: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63747909753, loc:(*time.Location)(0x7962e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63747909753, loc:(*time.Location)(0x7962e20)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63747909754, loc:(*time.Location)(0x7962e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63747909753, loc:(*time.Location)(0x7962e20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6bd9446d55\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Feb 3 00:42:39.056: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a mutating webhook should work [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a mutating webhook configuration STEP: Updating a mutating webhook configuration's rules to not include the create operation STEP: Creating a configMap that should not be mutated STEP: Patching a mutating webhook configuration's rules to include the create operation STEP: Creating a configMap that should be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 3 00:42:39.211: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-791" for this suite. STEP: Destroying namespace "webhook-791-markers" for this suite. 
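(Editorial note: the steps above flip the CREATE operation out of and back into the webhook's rules to prove that updates and patches to a MutatingWebhookConfiguration take effect. Purely as a sketch — the configuration name and rule index are assumptions — a JSON patch like the one below is one way to put CREATE back into the first rule of the first webhook.)

```go
package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// Replace the operations list of webhooks[0].rules[0] so it includes CREATE again.
	patch := []byte(`[{"op":"replace","path":"/webhooks/0/rules/0/operations","value":["CREATE","UPDATE"]}]`)
	_, err = cs.AdmissionregistrationV1().MutatingWebhookConfigurations().Patch(
		context.Background(),
		"example-mutating-webhook-configuration", // assumed name, not the test's generated one
		types.JSONPatchType,
		patch,
		metav1.PatchOptions{},
	)
	if err != nil {
		panic(err)
	}
}
```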
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 • [SLOW TEST:6.215 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a mutating webhook should work [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":309,"completed":304,"skipped":5224,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] Aggregator /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 3 00:42:39.350: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename aggregator STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Aggregator /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76 Feb 3 00:42:39.464: INFO: >>> kubeConfig: /root/.kube/config [It] Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Registering the sample API server. Feb 3 00:42:40.384: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set Feb 3 00:42:42.671: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63747909760, loc:(*time.Location)(0x7962e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63747909760, loc:(*time.Location)(0x7962e20)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63747909760, loc:(*time.Location)(0x7962e20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63747909760, loc:(*time.Location)(0x7962e20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-67dc674868\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 3 00:42:46.016: INFO: Waited 1.131392112s for the sample-apiserver to be ready to handle requests. 
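(Editorial note: both the webhook and aggregator tests above include a "Wait for the deployment to be ready" step and dump the DeploymentStatus while the Available condition is still False. A minimal sketch of that readiness check with client-go, assuming illustrative namespace and deployment names from the log:)

```go
package main

import (
	"context"
	"time"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForDeploymentAvailable polls until the Deployment reports an
// Available=True condition, mirroring the "Wait for the deployment to be
// ready" steps in the tests above.
func waitForDeploymentAvailable(cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		d, err := cs.AppsV1().Deployments(ns).Get(context.Background(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		for _, c := range d.Status.Conditions {
			if c.Type == appsv1.DeploymentAvailable && c.Status == corev1.ConditionTrue {
				return true, nil
			}
		}
		return false, nil
	})
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	// Illustrative names; the e2e namespaces are generated per test run.
	if err := waitForDeploymentAvailable(cs, "aggregator-5587", "sample-apiserver-deployment", 5*time.Minute); err != nil {
		panic(err)
	}
}
```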
[AfterEach] [sig-api-machinery] Aggregator /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67 [AfterEach] [sig-api-machinery] Aggregator /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 3 00:42:46.750: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "aggregator-5587" for this suite. • [SLOW TEST:7.461 seconds] [sig-api-machinery] Aggregator /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","total":309,"completed":305,"skipped":5265,"failed":0} SSSSSSSSSS ------------------------------ [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 3 00:42:46.811: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 Feb 3 00:42:47.071: INFO: Creating ReplicaSet my-hostname-basic-5b85e850-a068-4583-99f9-92e48e438b79 Feb 3 00:42:47.078: INFO: Pod name my-hostname-basic-5b85e850-a068-4583-99f9-92e48e438b79: Found 0 pods out of 1 Feb 3 00:42:52.116: INFO: Pod name my-hostname-basic-5b85e850-a068-4583-99f9-92e48e438b79: Found 1 pods out of 1 Feb 3 00:42:52.116: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-5b85e850-a068-4583-99f9-92e48e438b79" is running Feb 3 00:42:52.125: INFO: Pod "my-hostname-basic-5b85e850-a068-4583-99f9-92e48e438b79-xjwjn" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-02-03 00:42:47 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-02-03 00:42:50 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-02-03 00:42:50 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-02-03 00:42:47 +0000 UTC Reason: Message:}]) Feb 3 00:42:52.126: INFO: Trying to dial the pod Feb 3 00:42:57.139: INFO: Controller my-hostname-basic-5b85e850-a068-4583-99f9-92e48e438b79: Got expected result from replica 1 [my-hostname-basic-5b85e850-a068-4583-99f9-92e48e438b79-xjwjn]: "my-hostname-basic-5b85e850-a068-4583-99f9-92e48e438b79-xjwjn", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 3 00:42:57.139: INFO: Waiting up to 
3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-2041" for this suite. • [SLOW TEST:10.338 seconds] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]","total":309,"completed":306,"skipped":5275,"failed":0} [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [k8s.io] KubeletManagedEtcHosts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 3 00:42:57.149: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts STEP: Waiting for a default service account to be provisioned in namespace [It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Setting up the test STEP: Creating hostNetwork=false pod STEP: Creating hostNetwork=true pod STEP: Running the test STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false Feb 3 00:43:07.323: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-753 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Feb 3 00:43:07.323: INFO: >>> kubeConfig: /root/.kube/config I0203 00:43:07.373890 7 log.go:181] (0xc0021d8840) (0xc0011b57c0) Create stream I0203 00:43:07.373926 7 log.go:181] (0xc0021d8840) (0xc0011b57c0) Stream added, broadcasting: 1 I0203 00:43:07.376327 7 log.go:181] (0xc0021d8840) Reply frame received for 1 I0203 00:43:07.376370 7 log.go:181] (0xc0021d8840) (0xc00404c000) Create stream I0203 00:43:07.376381 7 log.go:181] (0xc0021d8840) (0xc00404c000) Stream added, broadcasting: 3 I0203 00:43:07.377390 7 log.go:181] (0xc0021d8840) Reply frame received for 3 I0203 00:43:07.377444 7 log.go:181] (0xc0021d8840) (0xc00404c0a0) Create stream I0203 00:43:07.377457 7 log.go:181] (0xc0021d8840) (0xc00404c0a0) Stream added, broadcasting: 5 I0203 00:43:07.378246 7 log.go:181] (0xc0021d8840) Reply frame received for 5 I0203 00:43:07.454832 7 log.go:181] (0xc0021d8840) Data frame received for 3 I0203 00:43:07.454902 7 log.go:181] (0xc00404c000) (3) Data frame handling I0203 00:43:07.454935 7 log.go:181] (0xc00404c000) (3) Data frame sent I0203 00:43:07.454962 7 log.go:181] (0xc0021d8840) Data frame received for 3 I0203 00:43:07.454980 7 log.go:181] (0xc00404c000) (3) Data frame handling I0203 00:43:07.455040 7 log.go:181] (0xc0021d8840) Data frame received for 5 I0203 00:43:07.455094 7 log.go:181] (0xc00404c0a0) (5) Data frame handling I0203 00:43:07.456469 7 log.go:181] (0xc0021d8840) Data frame received for 1 I0203 00:43:07.456488 7 log.go:181] (0xc0011b57c0) (1) Data frame handling I0203 00:43:07.456507 7 log.go:181] (0xc0011b57c0) (1) Data frame sent I0203 00:43:07.456519 7 log.go:181] (0xc0021d8840) 
(0xc0011b57c0) Stream removed, broadcasting: 1 I0203 00:43:07.456613 7 log.go:181] (0xc0021d8840) (0xc0011b57c0) Stream removed, broadcasting: 1 I0203 00:43:07.456640 7 log.go:181] (0xc0021d8840) (0xc00404c000) Stream removed, broadcasting: 3 I0203 00:43:07.456654 7 log.go:181] (0xc0021d8840) (0xc00404c0a0) Stream removed, broadcasting: 5 Feb 3 00:43:07.456: INFO: Exec stderr: "" I0203 00:43:07.456718 7 log.go:181] (0xc0021d8840) Go away received Feb 3 00:43:07.456: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-753 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Feb 3 00:43:07.456: INFO: >>> kubeConfig: /root/.kube/config I0203 00:43:07.490468 7 log.go:181] (0xc0021d8b00) (0xc0011b5c20) Create stream I0203 00:43:07.490493 7 log.go:181] (0xc0021d8b00) (0xc0011b5c20) Stream added, broadcasting: 1 I0203 00:43:07.492582 7 log.go:181] (0xc0021d8b00) Reply frame received for 1 I0203 00:43:07.492622 7 log.go:181] (0xc0021d8b00) (0xc003601540) Create stream I0203 00:43:07.492638 7 log.go:181] (0xc0021d8b00) (0xc003601540) Stream added, broadcasting: 3 I0203 00:43:07.493576 7 log.go:181] (0xc0021d8b00) Reply frame received for 3 I0203 00:43:07.493609 7 log.go:181] (0xc0021d8b00) (0xc00404c140) Create stream I0203 00:43:07.493620 7 log.go:181] (0xc0021d8b00) (0xc00404c140) Stream added, broadcasting: 5 I0203 00:43:07.494344 7 log.go:181] (0xc0021d8b00) Reply frame received for 5 I0203 00:43:07.570637 7 log.go:181] (0xc0021d8b00) Data frame received for 5 I0203 00:43:07.570693 7 log.go:181] (0xc00404c140) (5) Data frame handling I0203 00:43:07.570729 7 log.go:181] (0xc0021d8b00) Data frame received for 3 I0203 00:43:07.570742 7 log.go:181] (0xc003601540) (3) Data frame handling I0203 00:43:07.570756 7 log.go:181] (0xc003601540) (3) Data frame sent I0203 00:43:07.570767 7 log.go:181] (0xc0021d8b00) Data frame received for 3 I0203 00:43:07.570778 7 log.go:181] (0xc003601540) (3) Data frame handling I0203 00:43:07.572074 7 log.go:181] (0xc0021d8b00) Data frame received for 1 I0203 00:43:07.572129 7 log.go:181] (0xc0011b5c20) (1) Data frame handling I0203 00:43:07.572173 7 log.go:181] (0xc0011b5c20) (1) Data frame sent I0203 00:43:07.572200 7 log.go:181] (0xc0021d8b00) (0xc0011b5c20) Stream removed, broadcasting: 1 I0203 00:43:07.572224 7 log.go:181] (0xc0021d8b00) Go away received I0203 00:43:07.572336 7 log.go:181] (0xc0021d8b00) (0xc0011b5c20) Stream removed, broadcasting: 1 I0203 00:43:07.572363 7 log.go:181] (0xc0021d8b00) (0xc003601540) Stream removed, broadcasting: 3 I0203 00:43:07.572376 7 log.go:181] (0xc0021d8b00) (0xc00404c140) Stream removed, broadcasting: 5 Feb 3 00:43:07.572: INFO: Exec stderr: "" Feb 3 00:43:07.572: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-753 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Feb 3 00:43:07.572: INFO: >>> kubeConfig: /root/.kube/config I0203 00:43:07.600149 7 log.go:181] (0xc0049ae9a0) (0xc002efcaa0) Create stream I0203 00:43:07.600185 7 log.go:181] (0xc0049ae9a0) (0xc002efcaa0) Stream added, broadcasting: 1 I0203 00:43:07.602746 7 log.go:181] (0xc0049ae9a0) Reply frame received for 1 I0203 00:43:07.602784 7 log.go:181] (0xc0049ae9a0) (0xc00404c1e0) Create stream I0203 00:43:07.602797 7 log.go:181] (0xc0049ae9a0) (0xc00404c1e0) Stream added, broadcasting: 3 I0203 00:43:07.603951 7 log.go:181] (0xc0049ae9a0) Reply frame received 
for 3 I0203 00:43:07.604000 7 log.go:181] (0xc0049ae9a0) (0xc0011b5d60) Create stream I0203 00:43:07.604012 7 log.go:181] (0xc0049ae9a0) (0xc0011b5d60) Stream added, broadcasting: 5 I0203 00:43:07.605577 7 log.go:181] (0xc0049ae9a0) Reply frame received for 5 I0203 00:43:07.660763 7 log.go:181] (0xc0049ae9a0) Data frame received for 5 I0203 00:43:07.660809 7 log.go:181] (0xc0011b5d60) (5) Data frame handling I0203 00:43:07.660957 7 log.go:181] (0xc0049ae9a0) Data frame received for 3 I0203 00:43:07.660996 7 log.go:181] (0xc00404c1e0) (3) Data frame handling I0203 00:43:07.661012 7 log.go:181] (0xc00404c1e0) (3) Data frame sent I0203 00:43:07.661026 7 log.go:181] (0xc0049ae9a0) Data frame received for 3 I0203 00:43:07.661034 7 log.go:181] (0xc00404c1e0) (3) Data frame handling I0203 00:43:07.661956 7 log.go:181] (0xc0049ae9a0) Data frame received for 1 I0203 00:43:07.661976 7 log.go:181] (0xc002efcaa0) (1) Data frame handling I0203 00:43:07.661993 7 log.go:181] (0xc002efcaa0) (1) Data frame sent I0203 00:43:07.662010 7 log.go:181] (0xc0049ae9a0) (0xc002efcaa0) Stream removed, broadcasting: 1 I0203 00:43:07.662047 7 log.go:181] (0xc0049ae9a0) Go away received I0203 00:43:07.662087 7 log.go:181] (0xc0049ae9a0) (0xc002efcaa0) Stream removed, broadcasting: 1 I0203 00:43:07.662102 7 log.go:181] (0xc0049ae9a0) (0xc00404c1e0) Stream removed, broadcasting: 3 I0203 00:43:07.662118 7 log.go:181] (0xc0049ae9a0) (0xc0011b5d60) Stream removed, broadcasting: 5 Feb 3 00:43:07.662: INFO: Exec stderr: "" Feb 3 00:43:07.662: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-753 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Feb 3 00:43:07.662: INFO: >>> kubeConfig: /root/.kube/config I0203 00:43:07.692707 7 log.go:181] (0xc0049af080) (0xc002efcd20) Create stream I0203 00:43:07.692741 7 log.go:181] (0xc0049af080) (0xc002efcd20) Stream added, broadcasting: 1 I0203 00:43:07.694659 7 log.go:181] (0xc0049af080) Reply frame received for 1 I0203 00:43:07.694692 7 log.go:181] (0xc0049af080) (0xc002efcdc0) Create stream I0203 00:43:07.694704 7 log.go:181] (0xc0049af080) (0xc002efcdc0) Stream added, broadcasting: 3 I0203 00:43:07.695645 7 log.go:181] (0xc0049af080) Reply frame received for 3 I0203 00:43:07.695665 7 log.go:181] (0xc0049af080) (0xc00404c320) Create stream I0203 00:43:07.695672 7 log.go:181] (0xc0049af080) (0xc00404c320) Stream added, broadcasting: 5 I0203 00:43:07.696393 7 log.go:181] (0xc0049af080) Reply frame received for 5 I0203 00:43:07.775712 7 log.go:181] (0xc0049af080) Data frame received for 3 I0203 00:43:07.775762 7 log.go:181] (0xc002efcdc0) (3) Data frame handling I0203 00:43:07.775779 7 log.go:181] (0xc002efcdc0) (3) Data frame sent I0203 00:43:07.775797 7 log.go:181] (0xc0049af080) Data frame received for 3 I0203 00:43:07.775821 7 log.go:181] (0xc002efcdc0) (3) Data frame handling I0203 00:43:07.775851 7 log.go:181] (0xc0049af080) Data frame received for 5 I0203 00:43:07.775869 7 log.go:181] (0xc00404c320) (5) Data frame handling I0203 00:43:07.777922 7 log.go:181] (0xc0049af080) Data frame received for 1 I0203 00:43:07.777958 7 log.go:181] (0xc002efcd20) (1) Data frame handling I0203 00:43:07.777986 7 log.go:181] (0xc002efcd20) (1) Data frame sent I0203 00:43:07.778013 7 log.go:181] (0xc0049af080) (0xc002efcd20) Stream removed, broadcasting: 1 I0203 00:43:07.778091 7 log.go:181] (0xc0049af080) Go away received I0203 00:43:07.778220 7 log.go:181] (0xc0049af080) (0xc002efcd20) 
Stream removed, broadcasting: 1 I0203 00:43:07.778252 7 log.go:181] (0xc0049af080) (0xc002efcdc0) Stream removed, broadcasting: 3 I0203 00:43:07.778280 7 log.go:181] (0xc0049af080) (0xc00404c320) Stream removed, broadcasting: 5 Feb 3 00:43:07.778: INFO: Exec stderr: "" STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount Feb 3 00:43:07.778: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-753 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Feb 3 00:43:07.778: INFO: >>> kubeConfig: /root/.kube/config I0203 00:43:07.810724 7 log.go:181] (0xc0048fc420) (0xc00404c640) Create stream I0203 00:43:07.810756 7 log.go:181] (0xc0048fc420) (0xc00404c640) Stream added, broadcasting: 1 I0203 00:43:07.812752 7 log.go:181] (0xc0048fc420) Reply frame received for 1 I0203 00:43:07.812808 7 log.go:181] (0xc0048fc420) (0xc0011b5f40) Create stream I0203 00:43:07.812826 7 log.go:181] (0xc0048fc420) (0xc0011b5f40) Stream added, broadcasting: 3 I0203 00:43:07.813989 7 log.go:181] (0xc0048fc420) Reply frame received for 3 I0203 00:43:07.814028 7 log.go:181] (0xc0048fc420) (0xc002efce60) Create stream I0203 00:43:07.814041 7 log.go:181] (0xc0048fc420) (0xc002efce60) Stream added, broadcasting: 5 I0203 00:43:07.814868 7 log.go:181] (0xc0048fc420) Reply frame received for 5 I0203 00:43:07.875778 7 log.go:181] (0xc0048fc420) Data frame received for 5 I0203 00:43:07.875810 7 log.go:181] (0xc002efce60) (5) Data frame handling I0203 00:43:07.875849 7 log.go:181] (0xc0048fc420) Data frame received for 3 I0203 00:43:07.875879 7 log.go:181] (0xc0011b5f40) (3) Data frame handling I0203 00:43:07.875904 7 log.go:181] (0xc0011b5f40) (3) Data frame sent I0203 00:43:07.875925 7 log.go:181] (0xc0048fc420) Data frame received for 3 I0203 00:43:07.875942 7 log.go:181] (0xc0011b5f40) (3) Data frame handling I0203 00:43:07.877530 7 log.go:181] (0xc0048fc420) Data frame received for 1 I0203 00:43:07.877565 7 log.go:181] (0xc00404c640) (1) Data frame handling I0203 00:43:07.877598 7 log.go:181] (0xc00404c640) (1) Data frame sent I0203 00:43:07.877616 7 log.go:181] (0xc0048fc420) (0xc00404c640) Stream removed, broadcasting: 1 I0203 00:43:07.877708 7 log.go:181] (0xc0048fc420) (0xc00404c640) Stream removed, broadcasting: 1 I0203 00:43:07.877732 7 log.go:181] (0xc0048fc420) (0xc0011b5f40) Stream removed, broadcasting: 3 I0203 00:43:07.877752 7 log.go:181] (0xc0048fc420) (0xc002efce60) Stream removed, broadcasting: 5 Feb 3 00:43:07.877: INFO: Exec stderr: "" I0203 00:43:07.877804 7 log.go:181] (0xc0048fc420) Go away received Feb 3 00:43:07.877: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-753 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Feb 3 00:43:07.877: INFO: >>> kubeConfig: /root/.kube/config I0203 00:43:07.913035 7 log.go:181] (0xc005d4c840) (0xc0036017c0) Create stream I0203 00:43:07.913075 7 log.go:181] (0xc005d4c840) (0xc0036017c0) Stream added, broadcasting: 1 I0203 00:43:07.915183 7 log.go:181] (0xc005d4c840) Reply frame received for 1 I0203 00:43:07.915225 7 log.go:181] (0xc005d4c840) (0xc0010d6460) Create stream I0203 00:43:07.915238 7 log.go:181] (0xc005d4c840) (0xc0010d6460) Stream added, broadcasting: 3 I0203 00:43:07.916236 7 log.go:181] (0xc005d4c840) Reply frame received for 3 I0203 00:43:07.916287 7 log.go:181] (0xc005d4c840) (0xc002efcf00) 
Create stream I0203 00:43:07.916316 7 log.go:181] (0xc005d4c840) (0xc002efcf00) Stream added, broadcasting: 5 I0203 00:43:07.917316 7 log.go:181] (0xc005d4c840) Reply frame received for 5 I0203 00:43:07.990008 7 log.go:181] (0xc005d4c840) Data frame received for 5 I0203 00:43:07.990060 7 log.go:181] (0xc002efcf00) (5) Data frame handling I0203 00:43:07.990089 7 log.go:181] (0xc005d4c840) Data frame received for 3 I0203 00:43:07.990104 7 log.go:181] (0xc0010d6460) (3) Data frame handling I0203 00:43:07.990117 7 log.go:181] (0xc0010d6460) (3) Data frame sent I0203 00:43:07.990128 7 log.go:181] (0xc005d4c840) Data frame received for 3 I0203 00:43:07.990138 7 log.go:181] (0xc0010d6460) (3) Data frame handling I0203 00:43:07.991489 7 log.go:181] (0xc005d4c840) Data frame received for 1 I0203 00:43:07.991516 7 log.go:181] (0xc0036017c0) (1) Data frame handling I0203 00:43:07.991535 7 log.go:181] (0xc0036017c0) (1) Data frame sent I0203 00:43:07.991567 7 log.go:181] (0xc005d4c840) (0xc0036017c0) Stream removed, broadcasting: 1 I0203 00:43:07.991592 7 log.go:181] (0xc005d4c840) Go away received I0203 00:43:07.991665 7 log.go:181] (0xc005d4c840) (0xc0036017c0) Stream removed, broadcasting: 1 I0203 00:43:07.991690 7 log.go:181] (0xc005d4c840) (0xc0010d6460) Stream removed, broadcasting: 3 I0203 00:43:07.991709 7 log.go:181] (0xc005d4c840) (0xc002efcf00) Stream removed, broadcasting: 5 Feb 3 00:43:07.991: INFO: Exec stderr: "" STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true Feb 3 00:43:07.991: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-753 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Feb 3 00:43:07.991: INFO: >>> kubeConfig: /root/.kube/config I0203 00:43:08.025000 7 log.go:181] (0xc0021d9600) (0xc0010d6a00) Create stream I0203 00:43:08.025026 7 log.go:181] (0xc0021d9600) (0xc0010d6a00) Stream added, broadcasting: 1 I0203 00:43:08.026996 7 log.go:181] (0xc0021d9600) Reply frame received for 1 I0203 00:43:08.027025 7 log.go:181] (0xc0021d9600) (0xc00404c6e0) Create stream I0203 00:43:08.027037 7 log.go:181] (0xc0021d9600) (0xc00404c6e0) Stream added, broadcasting: 3 I0203 00:43:08.028711 7 log.go:181] (0xc0021d9600) Reply frame received for 3 I0203 00:43:08.028741 7 log.go:181] (0xc0021d9600) (0xc003601860) Create stream I0203 00:43:08.028752 7 log.go:181] (0xc0021d9600) (0xc003601860) Stream added, broadcasting: 5 I0203 00:43:08.029725 7 log.go:181] (0xc0021d9600) Reply frame received for 5 I0203 00:43:08.093788 7 log.go:181] (0xc0021d9600) Data frame received for 5 I0203 00:43:08.093829 7 log.go:181] (0xc003601860) (5) Data frame handling I0203 00:43:08.093871 7 log.go:181] (0xc0021d9600) Data frame received for 3 I0203 00:43:08.093899 7 log.go:181] (0xc00404c6e0) (3) Data frame handling I0203 00:43:08.093918 7 log.go:181] (0xc00404c6e0) (3) Data frame sent I0203 00:43:08.093933 7 log.go:181] (0xc0021d9600) Data frame received for 3 I0203 00:43:08.093942 7 log.go:181] (0xc00404c6e0) (3) Data frame handling I0203 00:43:08.094982 7 log.go:181] (0xc0021d9600) Data frame received for 1 I0203 00:43:08.095008 7 log.go:181] (0xc0010d6a00) (1) Data frame handling I0203 00:43:08.095026 7 log.go:181] (0xc0010d6a00) (1) Data frame sent I0203 00:43:08.095038 7 log.go:181] (0xc0021d9600) (0xc0010d6a00) Stream removed, broadcasting: 1 I0203 00:43:08.095053 7 log.go:181] (0xc0021d9600) Go away received I0203 00:43:08.095189 7 
log.go:181] (0xc0021d9600) (0xc0010d6a00) Stream removed, broadcasting: 1 I0203 00:43:08.095209 7 log.go:181] (0xc0021d9600) (0xc00404c6e0) Stream removed, broadcasting: 3 I0203 00:43:08.095221 7 log.go:181] (0xc0021d9600) (0xc003601860) Stream removed, broadcasting: 5 Feb 3 00:43:08.095: INFO: Exec stderr: "" Feb 3 00:43:08.095: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-753 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Feb 3 00:43:08.095: INFO: >>> kubeConfig: /root/.kube/config I0203 00:43:08.125816 7 log.go:181] (0xc0048fcb00) (0xc00404ca00) Create stream I0203 00:43:08.125854 7 log.go:181] (0xc0048fcb00) (0xc00404ca00) Stream added, broadcasting: 1 I0203 00:43:08.127686 7 log.go:181] (0xc0048fcb00) Reply frame received for 1 I0203 00:43:08.127733 7 log.go:181] (0xc0048fcb00) (0xc0010d77c0) Create stream I0203 00:43:08.127749 7 log.go:181] (0xc0048fcb00) (0xc0010d77c0) Stream added, broadcasting: 3 I0203 00:43:08.128948 7 log.go:181] (0xc0048fcb00) Reply frame received for 3 I0203 00:43:08.128978 7 log.go:181] (0xc0048fcb00) (0xc003601900) Create stream I0203 00:43:08.128990 7 log.go:181] (0xc0048fcb00) (0xc003601900) Stream added, broadcasting: 5 I0203 00:43:08.129722 7 log.go:181] (0xc0048fcb00) Reply frame received for 5 I0203 00:43:08.192269 7 log.go:181] (0xc0048fcb00) Data frame received for 3 I0203 00:43:08.192358 7 log.go:181] (0xc0010d77c0) (3) Data frame handling I0203 00:43:08.192386 7 log.go:181] (0xc0010d77c0) (3) Data frame sent I0203 00:43:08.192419 7 log.go:181] (0xc0048fcb00) Data frame received for 3 I0203 00:43:08.192442 7 log.go:181] (0xc0010d77c0) (3) Data frame handling I0203 00:43:08.192462 7 log.go:181] (0xc0048fcb00) Data frame received for 5 I0203 00:43:08.192476 7 log.go:181] (0xc003601900) (5) Data frame handling I0203 00:43:08.193751 7 log.go:181] (0xc0048fcb00) Data frame received for 1 I0203 00:43:08.193798 7 log.go:181] (0xc00404ca00) (1) Data frame handling I0203 00:43:08.193815 7 log.go:181] (0xc00404ca00) (1) Data frame sent I0203 00:43:08.193829 7 log.go:181] (0xc0048fcb00) (0xc00404ca00) Stream removed, broadcasting: 1 I0203 00:43:08.193931 7 log.go:181] (0xc0048fcb00) (0xc00404ca00) Stream removed, broadcasting: 1 I0203 00:43:08.193952 7 log.go:181] (0xc0048fcb00) (0xc0010d77c0) Stream removed, broadcasting: 3 I0203 00:43:08.193963 7 log.go:181] (0xc0048fcb00) (0xc003601900) Stream removed, broadcasting: 5 Feb 3 00:43:08.193: INFO: Exec stderr: "" Feb 3 00:43:08.194: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-753 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Feb 3 00:43:08.194: INFO: >>> kubeConfig: /root/.kube/config I0203 00:43:08.194106 7 log.go:181] (0xc0048fcb00) Go away received I0203 00:43:08.223106 7 log.go:181] (0xc005d4d080) (0xc003601b80) Create stream I0203 00:43:08.223137 7 log.go:181] (0xc005d4d080) (0xc003601b80) Stream added, broadcasting: 1 I0203 00:43:08.225399 7 log.go:181] (0xc005d4d080) Reply frame received for 1 I0203 00:43:08.225439 7 log.go:181] (0xc005d4d080) (0xc00404caa0) Create stream I0203 00:43:08.225454 7 log.go:181] (0xc005d4d080) (0xc00404caa0) Stream added, broadcasting: 3 I0203 00:43:08.226386 7 log.go:181] (0xc005d4d080) Reply frame received for 3 I0203 00:43:08.226429 7 log.go:181] (0xc005d4d080) (0xc002efcfa0) Create stream I0203 00:43:08.226444 7 
log.go:181] (0xc005d4d080) (0xc002efcfa0) Stream added, broadcasting: 5 I0203 00:43:08.227277 7 log.go:181] (0xc005d4d080) Reply frame received for 5 I0203 00:43:08.301588 7 log.go:181] (0xc005d4d080) Data frame received for 5 I0203 00:43:08.301631 7 log.go:181] (0xc002efcfa0) (5) Data frame handling I0203 00:43:08.301660 7 log.go:181] (0xc005d4d080) Data frame received for 3 I0203 00:43:08.301675 7 log.go:181] (0xc00404caa0) (3) Data frame handling I0203 00:43:08.301691 7 log.go:181] (0xc00404caa0) (3) Data frame sent I0203 00:43:08.301703 7 log.go:181] (0xc005d4d080) Data frame received for 3 I0203 00:43:08.301729 7 log.go:181] (0xc00404caa0) (3) Data frame handling I0203 00:43:08.302885 7 log.go:181] (0xc005d4d080) Data frame received for 1 I0203 00:43:08.302907 7 log.go:181] (0xc003601b80) (1) Data frame handling I0203 00:43:08.302933 7 log.go:181] (0xc003601b80) (1) Data frame sent I0203 00:43:08.302955 7 log.go:181] (0xc005d4d080) (0xc003601b80) Stream removed, broadcasting: 1 I0203 00:43:08.303035 7 log.go:181] (0xc005d4d080) Go away received I0203 00:43:08.303059 7 log.go:181] (0xc005d4d080) (0xc003601b80) Stream removed, broadcasting: 1 I0203 00:43:08.303090 7 log.go:181] (0xc005d4d080) (0xc00404caa0) Stream removed, broadcasting: 3 I0203 00:43:08.303109 7 log.go:181] (0xc005d4d080) (0xc002efcfa0) Stream removed, broadcasting: 5 Feb 3 00:43:08.303: INFO: Exec stderr: "" Feb 3 00:43:08.303: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-753 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Feb 3 00:43:08.303: INFO: >>> kubeConfig: /root/.kube/config I0203 00:43:08.350022 7 log.go:181] (0xc005d4d760) (0xc003601f40) Create stream I0203 00:43:08.350066 7 log.go:181] (0xc005d4d760) (0xc003601f40) Stream added, broadcasting: 1 I0203 00:43:08.352111 7 log.go:181] (0xc005d4d760) Reply frame received for 1 I0203 00:43:08.352150 7 log.go:181] (0xc005d4d760) (0xc0074bb400) Create stream I0203 00:43:08.352165 7 log.go:181] (0xc005d4d760) (0xc0074bb400) Stream added, broadcasting: 3 I0203 00:43:08.353361 7 log.go:181] (0xc005d4d760) Reply frame received for 3 I0203 00:43:08.353409 7 log.go:181] (0xc005d4d760) (0xc0074bbb80) Create stream I0203 00:43:08.353425 7 log.go:181] (0xc005d4d760) (0xc0074bbb80) Stream added, broadcasting: 5 I0203 00:43:08.354145 7 log.go:181] (0xc005d4d760) Reply frame received for 5 I0203 00:43:08.434891 7 log.go:181] (0xc005d4d760) Data frame received for 3 I0203 00:43:08.434936 7 log.go:181] (0xc0074bb400) (3) Data frame handling I0203 00:43:08.434956 7 log.go:181] (0xc0074bb400) (3) Data frame sent I0203 00:43:08.434969 7 log.go:181] (0xc005d4d760) Data frame received for 3 I0203 00:43:08.434978 7 log.go:181] (0xc0074bb400) (3) Data frame handling I0203 00:43:08.435011 7 log.go:181] (0xc005d4d760) Data frame received for 5 I0203 00:43:08.435032 7 log.go:181] (0xc0074bbb80) (5) Data frame handling I0203 00:43:08.436245 7 log.go:181] (0xc005d4d760) Data frame received for 1 I0203 00:43:08.436267 7 log.go:181] (0xc003601f40) (1) Data frame handling I0203 00:43:08.436292 7 log.go:181] (0xc003601f40) (1) Data frame sent I0203 00:43:08.436325 7 log.go:181] (0xc005d4d760) (0xc003601f40) Stream removed, broadcasting: 1 I0203 00:43:08.436369 7 log.go:181] (0xc005d4d760) Go away received I0203 00:43:08.436407 7 log.go:181] (0xc005d4d760) (0xc003601f40) Stream removed, broadcasting: 1 I0203 00:43:08.436423 7 log.go:181] (0xc005d4d760) (0xc0074bb400) 
Stream removed, broadcasting: 3 I0203 00:43:08.436442 7 log.go:181] (0xc005d4d760) (0xc0074bbb80) Stream removed, broadcasting: 5 Feb 3 00:43:08.436: INFO: Exec stderr: "" [AfterEach] [k8s.io] KubeletManagedEtcHosts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 3 00:43:08.436: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-kubelet-etc-hosts-753" for this suite. • [SLOW TEST:11.295 seconds] [k8s.io] KubeletManagedEtcHosts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":309,"completed":307,"skipped":5275,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 3 00:43:08.445: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 Feb 3 00:43:08.536: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-a2eeb8c7-d243-4eeb-ba3e-a37121005989" in namespace "security-context-test-8650" to be "Succeeded or Failed" Feb 3 00:43:08.593: INFO: Pod "busybox-privileged-false-a2eeb8c7-d243-4eeb-ba3e-a37121005989": Phase="Pending", Reason="", readiness=false. Elapsed: 57.463409ms Feb 3 00:43:10.599: INFO: Pod "busybox-privileged-false-a2eeb8c7-d243-4eeb-ba3e-a37121005989": Phase="Pending", Reason="", readiness=false. Elapsed: 2.063038424s Feb 3 00:43:12.603: INFO: Pod "busybox-privileged-false-a2eeb8c7-d243-4eeb-ba3e-a37121005989": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.0676665s Feb 3 00:43:12.604: INFO: Pod "busybox-privileged-false-a2eeb8c7-d243-4eeb-ba3e-a37121005989" satisfied condition "Succeeded or Failed" Feb 3 00:43:12.621: INFO: Got logs for pod "busybox-privileged-false-a2eeb8c7-d243-4eeb-ba3e-a37121005989": "ip: RTNETLINK answers: Operation not permitted\n" [AfterEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 3 00:43:12.621: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-8650" for this suite. 
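[Editorial note] The security-context spec above creates a pod whose single container sets privileged: false and then checks that the container log contains the kernel's refusal ("ip: RTNETLINK answers: Operation not permitted"). The Go sketch below is a hypothetical equivalent of that pod built from the k8s.io/api types, not the suite's own source; the image tag, the exact ip command, and the trailing "|| true" (which keeps the exit code 0 so the pod can still reach the Succeeded phase the test waits for) are assumptions.

```go
// Hypothetical sketch of a pod like busybox-privileged-false-* above.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	privileged := false
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "busybox-privileged-false-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "busybox",
				Image: "busybox:1.29", // assumed image tag
				// With Privileged=false the netlink write is refused, so the
				// container log ends up holding "ip: RTNETLINK answers:
				// Operation not permitted"; "|| true" keeps the exit code 0.
				Command:         []string{"sh", "-c", "ip link add dummy0 type dummy || true"},
				SecurityContext: &corev1.SecurityContext{Privileged: &privileged},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```

Applying a manifest like the printed one and reading the container log should reproduce the same RTNETLINK message on any runtime that honours the privileged flag.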
•{"msg":"PASSED [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":309,"completed":308,"skipped":5342,"failed":0} S ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 3 00:43:12.631: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating configMap with name projected-configmap-test-volume-c034a898-754e-4841-be9d-a63410498186 STEP: Creating a pod to test consume configMaps Feb 3 00:43:12.744: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-7198c8e4-961c-4abf-a564-e7f0f28acf0c" in namespace "projected-2451" to be "Succeeded or Failed" Feb 3 00:43:12.748: INFO: Pod "pod-projected-configmaps-7198c8e4-961c-4abf-a564-e7f0f28acf0c": Phase="Pending", Reason="", readiness=false. Elapsed: 3.37114ms Feb 3 00:43:14.793: INFO: Pod "pod-projected-configmaps-7198c8e4-961c-4abf-a564-e7f0f28acf0c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.048801854s Feb 3 00:43:16.796: INFO: Pod "pod-projected-configmaps-7198c8e4-961c-4abf-a564-e7f0f28acf0c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.051950119s Feb 3 00:43:18.801: INFO: Pod "pod-projected-configmaps-7198c8e4-961c-4abf-a564-e7f0f28acf0c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.05638787s STEP: Saw pod success Feb 3 00:43:18.801: INFO: Pod "pod-projected-configmaps-7198c8e4-961c-4abf-a564-e7f0f28acf0c" satisfied condition "Succeeded or Failed" Feb 3 00:43:18.804: INFO: Trying to get logs from node leguer-worker pod pod-projected-configmaps-7198c8e4-961c-4abf-a564-e7f0f28acf0c container agnhost-container: STEP: delete the pod Feb 3 00:43:18.840: INFO: Waiting for pod pod-projected-configmaps-7198c8e4-961c-4abf-a564-e7f0f28acf0c to disappear Feb 3 00:43:18.852: INFO: Pod pod-projected-configmaps-7198c8e4-961c-4abf-a564-e7f0f28acf0c no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 3 00:43:18.852: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2451" for this suite. 
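[Editorial note] The projected-configMap spec that just finished creates a ConfigMap named projected-configmap-test-volume-..., mounts it into a pod through a projected volume, and waits for the pod to reach "Succeeded or Failed" before reading the container log. Below is a minimal hypothetical sketch of such a pod using the same k8s.io/api types; the volume name, mount path, ConfigMap name and the busybox command are assumptions, not the test's actual agnhost invocation.

```go
// Hypothetical sketch of a pod like pod-projected-configmaps-* above.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-configmaps-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "projected-configmap-volume", // assumed volume name
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							ConfigMap: &corev1.ConfigMapProjection{
								LocalObjectReference: corev1.LocalObjectReference{
									// Must match an existing ConfigMap in the namespace.
									Name: "projected-configmap-test-volume-example",
								},
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:  "projected-configmap-reader",
				Image: "busybox:1.29",
				// Dump whatever the kubelet projected into the mount; the real
				// test asserts on the file contents rather than just printing them.
				Command: []string{"sh", "-c", "cat /etc/projected/*"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "projected-configmap-volume",
					MountPath: "/etc/projected",
				}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```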
• [SLOW TEST:6.232 seconds] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":309,"completed":309,"skipped":5343,"failed":0} SSSSSSSSSSSSSSSFeb 3 00:43:18.863: INFO: Running AfterSuite actions on all nodes Feb 3 00:43:18.864: INFO: Running AfterSuite actions on node 1 Feb 3 00:43:18.864: INFO: Skipping dumping logs from cluster JUnit report was created: /home/opnfv/functest/results/k8s_conformance/junit_01.xml {"msg":"Test Suite completed","total":309,"completed":309,"skipped":5358,"failed":0} Ran 309 of 5667 Specs in 7825.555 seconds SUCCESS! -- 309 Passed | 0 Failed | 0 Pending | 5358 Skipped PASS
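[Editorial note] Each {"msg": ...} record in the output above is a self-contained JSON object carrying the running totals, so the closing figures (309 completed, 5358 skipped, 0 failed) can be re-derived from the log alone. A minimal Go sketch, assuming the log has been saved to a file named conformance.log (the filename, and decoding only the first record per line, are assumptions):

```go
// Minimal sketch: recompute the suite tallies from the JSON progress records
// embedded in a conformance log like the one above.
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"log"
	"os"
	"strings"
)

// progress mirrors the fields visible in the records: msg, total, completed, skipped, failed.
type progress struct {
	Msg       string `json:"msg"`
	Total     int    `json:"total"`
	Completed int    `json:"completed"`
	Skipped   int    `json:"skipped"`
	Failed    int    `json:"failed"`
}

func main() {
	f, err := os.Open("conformance.log") // assumed filename for the saved log
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	var last progress
	passed := 0
	sc := bufio.NewScanner(f)
	sc.Buffer(make([]byte, 1024*1024), 1024*1024) // the wrapped log lines are long
	for sc.Scan() {
		line := sc.Text()
		i := strings.Index(line, `{"msg":`)
		if i < 0 {
			continue
		}
		// Decode just the first record on the line; trailing text is ignored,
		// and records split across wrapped lines are skipped.
		var p progress
		if json.NewDecoder(strings.NewReader(line[i:])).Decode(&p) != nil {
			continue
		}
		if strings.HasPrefix(p.Msg, "PASSED ") {
			passed++
		}
		last = p
	}
	if err := sc.Err(); err != nil {
		log.Fatal(err)
	}
	fmt.Printf("passed=%d completed=%d/%d skipped=%d failed=%d\n",
		passed, last.Completed, last.Total, last.Skipped, last.Failed)
}
```

For the run above, the last record printed would be the "Test Suite completed" one, matching the SUCCESS line's 309 Passed / 0 Failed / 5358 Skipped.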